# Deepen Plan - Power Enhancement Mode

## Introduction
Section titled “Introduction”Note: The current year is 2026. Use this when searching for recent documentation and best practices.
This command takes an existing plan (from /workflows:plan) and enhances each section with parallel research agents. Each major element gets its own dedicated research sub-agent to find:
- Best practices and industry patterns
- Performance optimizations
- UI/UX improvements (if applicable)
- Quality enhancements and edge cases
- Real-world implementation examples
The result is a deeply grounded, production-ready plan with concrete implementation details.
## Plan File

<plan_path> #$ARGUMENTS </plan_path>
If the plan path above is empty:
- Check for recent plans: `ls -la docs/plans/`
- Ask the user: "Which plan would you like to deepen? Please provide the path (e.g., docs/plans/2026-01-15-feat-my-feature-plan.md)."
Do not proceed until you have a valid plan file path.
## Main Tasks

### 1. Parse and Analyze Plan Structure

Read the plan file and extract:
- Overview/Problem Statement
- Proposed Solution sections
- Technical Approach/Architecture
- Implementation phases/steps
- Code examples and file references
- Acceptance criteria
- Any UI/UX components mentioned
- Technologies/frameworks mentioned (Rails, React, Python, TypeScript, etc.)
- Domain areas (data models, APIs, UI, security, performance, etc.)
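The section titles can be pulled out of the plan mechanically before any deeper analysis. A minimal sketch, assuming the plan uses standard `##` markdown headings (the heredoc fixture stands in for the real plan file):

```bash
# Demo plan content standing in for the real docs/plans/*.md file
plan=$(mktemp)
cat > "$plan" <<'EOF'
## Overview
## Technical Approach
## Implementation Phases
EOF

# Turn each "## Heading" into a numbered manifest entry
manifest=$(grep '^## ' "$plan" | sed 's/^## //' | nl -w1 -s': ' | sed 's/^/Section /')
echo "$manifest"
rm -f "$plan"
```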
Create a section manifest:
```
Section 1: [Title] - [Brief description of what to research]
Section 2: [Title] - [Brief description of what to research]
...
```

### 2. Discover and Apply Available Skills

Step 1: Discover ALL available skills from ALL sources

```bash
# 1. Project-local skills (highest priority - project-specific)
ls .opencode/skills/

# 2. User's global skills
ls ~/.config/opencode/skills/

# 3. List bundled skills from systematic plugin
systematic list skills
```

Important: Check EVERY source. Use skills from ANY installed plugin that's relevant.
Step 2: For each discovered skill, read its SKILL.md to understand what it does

```bash
# For each skill directory found, read its documentation
cat [skill-path]/SKILL.md
```

Step 3: Match skills to plan content
For each skill discovered:
- Read its SKILL.md description
- Check if any plan sections match the skill’s domain
- If there’s a match, spawn a sub-agent to apply that skill’s knowledge
Step 4: Spawn a sub-agent for EVERY matched skill
CRITICAL: For EACH skill that matches, spawn a separate sub-agent and instruct it to USE that skill.
For each matched skill:
```
Task general-purpose: "You have the [skill-name] skill available at [skill-path].

YOUR JOB: Use this skill on the plan.

1. Read the skill: cat [skill-path]/SKILL.md
2. Follow the skill's instructions exactly
3. Apply the skill to this content:

[relevant plan section or full plan]

4. Return the skill's full output

The skill tells you what to do - follow it. Execute the skill completely."
```

Spawn ALL skill sub-agents in PARALLEL:
- 1 sub-agent per matched skill
- Each sub-agent reads and uses its assigned skill
- All run simultaneously
- 10, 20, 30 skill sub-agents is fine
Each sub-agent:
- Reads its skill’s SKILL.md
- Follows the skill’s workflow/instructions
- Applies the skill to the plan
- Returns whatever the skill produces (code, recommendations, patterns, reviews, etc.)
Example spawns:
```
task: "Load the systematic:agent-native-architecture skill and apply it to: [agent/tool sections of plan]"

task: "Load the systematic:brainstorming skill and apply it to: [sections needing design exploration]"

task: "Load the systematic:compound-docs skill and search for relevant documented solutions for: [plan topic]"
```

No limit on skill sub-agents. Spawn one for every skill that could possibly be relevant.
### 3. Discover and Apply Learnings/Solutions

LEARNINGS LOCATION - Check these exact folders:

```
docs/solutions/          <-- PRIMARY: Project-level learnings (created by /workflows:compound)
├── performance-issues/
│   └── *.md
├── debugging-patterns/
│   └── *.md
├── configuration-fixes/
│   └── *.md
├── integration-issues/
│   └── *.md
├── deployment-issues/
│   └── *.md
└── [other-categories]/
    └── *.md
```

Step 1: Find ALL learning markdown files
Run these commands to get every learning file:
```bash
# PRIMARY LOCATION - Project learnings
find docs/solutions -name "*.md" -type f 2>/dev/null

# If docs/solutions doesn't exist, check alternate locations:
find .opencode/docs -name "*.md" -type f 2>/dev/null
```

Step 2: Read frontmatter of each learning to filter
Each learning file has YAML frontmatter with metadata. Read the first ~20 lines of each file to get:
```yaml
---
title: "N+1 Query Fix for Briefs"
category: performance-issues
tags: [activerecord, n-plus-one, includes, eager-loading]
module: Briefs
symptom: "Slow page load, multiple queries in logs"
root_cause: "Missing includes on association"
---
```

For each .md file, quickly scan its frontmatter:

```bash
# Read first 20 lines of each learning (frontmatter + summary)
head -20 docs/solutions/**/*.md
```

Step 3: Filter - only spawn sub-agents for LIKELY relevant learnings
Compare each learning’s frontmatter against the plan:
- `tags`: Do any tags match technologies/patterns in the plan?
- `category`: Is this category relevant? (e.g., skip deployment-issues if plan is UI-only)
- `module`: Does the plan touch this module?
- `symptom`/`root_cause`: Could this problem occur with the plan?
SKIP learnings that are clearly not applicable:

- Plan is frontend-only → skip database-migrations/ learnings
- Plan is Python → skip rails-specific/ learnings
- Plan has no auth → skip authentication-issues/ learnings
SPAWN sub-agents for learnings that MIGHT apply:
- Any tag overlap with plan technologies
- Same category as plan domain
- Similar patterns or concerns
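The tag-overlap test can be done in plain shell before spawning anything. A rough sketch, where the plan keywords and the learning file are illustrative fixtures rather than real project files:

```bash
# Hypothetical plan keywords extracted from the plan's technology list
plan_keywords="rails activerecord redis caching"

# Fixture learning file with typical frontmatter
learning=$(mktemp)
cat > "$learning" <<'EOF'
---
title: "N+1 Query Fix for Briefs"
category: performance-issues
tags: [activerecord, n-plus-one, includes, eager-loading]
---
EOF

# Pull the tags line out of the frontmatter and normalize it to bare words
tags=$(head -20 "$learning" | grep '^tags:' | tr -d '[],' | cut -d' ' -f2-)

# Mark the learning relevant if at least one tag matches a plan keyword
relevant=no
for tag in $tags; do
  case " $plan_keywords " in
    *" $tag "*) relevant=yes ;;
  esac
done
echo "$relevant"
rm -f "$learning"
```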
Step 4: Spawn sub-agents for filtered learnings
For each learning that passes the filter:
```
Task general-purpose: "LEARNING FILE: [full path to .md file]

1. Read this learning file completely
2. This learning documents a previously solved problem

Check if this learning applies to this plan:

---
[full plan content]
---

If relevant:
- Explain specifically how it applies
- Quote the key insight or solution
- Suggest where/how to incorporate it

If NOT relevant after deeper analysis:
- Say 'Not applicable: [reason]'"
```

Example filtering:
```bash
# Found 15 learning files, plan is about "Rails API caching"

# SPAWN (likely relevant):
docs/solutions/performance-issues/n-plus-one-queries.md      # tags: [activerecord] ✓
docs/solutions/performance-issues/redis-cache-stampede.md    # tags: [caching, redis] ✓
docs/solutions/configuration-fixes/redis-connection-pool.md  # tags: [redis] ✓

# SKIP (clearly not applicable):
docs/solutions/deployment-issues/heroku-memory-quota.md      # not about caching
docs/solutions/frontend-issues/stimulus-race-condition.md    # plan is API, not frontend
docs/solutions/authentication-issues/jwt-expiry.md           # plan has no auth
```

Spawn sub-agents in PARALLEL for all filtered learnings.
These learnings are institutional knowledge - applying them prevents repeating past mistakes.
### 4. Launch Per-Section Research Agents

For each identified section, launch parallel research:

```
Task Explore: "Research best practices, patterns, and real-world examples for: [section topic].

Find:
- Industry standards and conventions
- Performance considerations
- Common pitfalls and how to avoid them
- Documentation and tutorials

Return concrete, actionable recommendations."
```

Also use Context7 MCP for framework documentation:
For any technologies/frameworks mentioned in the plan, use Context7 (if available) to query library documentation for specific patterns and best practices.
Use WebSearch for current best practices:
Search for recent (2024-2026) articles, blog posts, and documentation on topics in the plan.
### 5. Discover and Run ALL Review Agents

Step 1: Discover ALL available agents from ALL sources

```bash
# 1. Project-local agents (highest priority - project-specific)
find .opencode/agents -name "*.md" 2>/dev/null

# 2. User's global agents
find ~/.config/opencode/agents -name "*.md" 2>/dev/null

# 3. List bundled agents from systematic plugin
systematic list agents
```

Important: Check EVERY source. Include agents from:
- Project .opencode/agents/
- User's ~/.config/opencode/agents/
- Systematic plugin bundled agents (review/, research/, design/ categories)
- SKIP: agents/workflow/* (these are workflow orchestrators, not reviewers)
Step 2: For each discovered agent, read its description
Read the first few lines of each agent file to understand what it reviews/analyzes.
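A lightweight way to collect those descriptions, assuming each agent file carries a `description:` line in its frontmatter (the two fixture agents below are hypothetical stand-ins):

```bash
# Fixture agent files standing in for real .opencode/agents/*.md definitions
agents=$(mktemp -d)
printf -- '---\ndescription: Checks performance patterns\n---\n' > "$agents/perf-reviewer.md"
printf -- '---\ndescription: Reviews security posture\n---\n' > "$agents/security-reviewer.md"

# Print each agent's name alongside its one-line description
for f in "$agents"/*.md; do
  name=$(basename "$f" .md)
  desc=$(grep -m1 '^description:' "$f" | cut -d' ' -f2-)
  echo "$name: $desc"
done
rm -rf "$agents"
```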
Step 3: Launch ALL agents in parallel
For EVERY agent discovered, launch a Task in parallel:
```
Task [agent-name]: "Review this plan using your expertise. Apply all your checks and patterns.

Plan content: [full plan content]"
```

CRITICAL RULES:
- Do NOT filter agents by “relevance” - run them ALL
- Do NOT skip agents because they “might not apply” - let them decide
- Launch ALL agents in a SINGLE message with multiple Task tool calls
- 20, 30, 40 parallel agents is fine - use everything
- Each agent may catch something others miss
- The goal is MAXIMUM coverage, not efficiency
Step 4: Also discover and run research agents
Research agents (like best-practices-researcher, framework-docs-researcher, git-history-analyzer, repo-research-analyst) should also be run for relevant plan sections.
### 6. Wait for ALL Agents and Synthesize Everything

Collect outputs from ALL sources:
- Skill-based sub-agents - Each skill’s full output (code examples, patterns, recommendations)
- Learnings/Solutions sub-agents - Relevant documented learnings from /workflows:compound
- Research agents - Best practices, documentation, real-world examples
- Review agents - All feedback from every reviewer (architecture, security, performance, simplicity, etc.)
- Context7 queries - Framework documentation and patterns
- Web searches - Current best practices and articles
For each agent’s findings, extract:
- Concrete recommendations (actionable items)
- Code patterns and examples (copy-paste ready)
- Anti-patterns to avoid (warnings)
- Performance considerations (metrics, benchmarks)
- Security considerations (vulnerabilities, mitigations)
- Edge cases discovered (handling strategies)
- Documentation links (references)
- Skill-specific patterns (from matched skills)
- Relevant learnings (past solutions that apply - prevent repeating mistakes)
Deduplicate and prioritize:
- Merge similar recommendations from multiple agents
- Prioritize by impact (high-value improvements first)
- Flag conflicting advice for human review
- Group by plan section
### 7. Enhance Plan Sections

Enhancement format for each section:
````markdown
## [Original Section Title]

[Original content preserved]

### Research Insights

**Best Practices:**
- [Concrete recommendation 1]
- [Concrete recommendation 2]

**Performance Considerations:**
- [Optimization opportunity]
- [Benchmark or metric to target]

**Implementation Details:**
```[language]
// Concrete code example from research
```

**Edge Cases:**
- [Edge case 1 and how to handle]
- [Edge case 2 and how to handle]

**References:**
- [Documentation URL 1]
- [Documentation URL 2]
````
### 8. Add Enhancement Summary

At the top of the plan, add a summary section:

```markdown
## Enhancement Summary

**Deepened on:** [Date]
**Sections enhanced:** [Count]
**Research agents used:** [List]

### Key Improvements
1. [Major improvement 1]
2. [Major improvement 2]
3. [Major improvement 3]

### New Considerations Discovered
- [Important finding 1]
- [Important finding 2]
```

### 9. Update Plan File
Write the enhanced plan:

- Preserve the original filename
- Add a -deepened suffix if the user prefers a new file
- Update any timestamps or metadata
## Output Format

Update the plan file in place (or, if the user requests a separate file, append -deepened after -plan, e.g., 2026-01-15-feat-auth-plan-deepened.md).
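The separate-file name can be derived with shell parameter expansion rather than string surgery. A small sketch using the example path above:

```bash
# Insert "-deepened" after "-plan" by stripping and re-adding the suffix
plan="docs/plans/2026-01-15-feat-auth-plan.md"
deepened="${plan%-plan.md}-plan-deepened.md"
echo "$deepened"  # docs/plans/2026-01-15-feat-auth-plan-deepened.md
```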
## Quality Checks

Before finalizing:
- All original content preserved
- Research insights clearly marked and attributed
- Code examples are syntactically correct
- Links are valid and relevant
- No contradictions between sections
- Enhancement summary accurately reflects changes
## Post-Enhancement Options

After writing the enhanced plan, use the question tool to present these options:
Question: “Plan deepened at [plan_path]. What would you like to do next?”
Options:
- View diff - Show what was added/changed
- Run /plan_review - Get feedback from reviewers on the enhanced plan
- Start /workflows:work - Begin implementing this enhanced plan
- Deepen further - Run another round of research on specific sections
- Revert - Restore original plan (if backup exists)
Based on selection:
- View diff → Run git diff [plan_path] or show before/after
- /plan_review → Call the /plan_review command with the plan file path
- /workflows:work → Call the /workflows:work command with the plan file path
- Deepen further → Ask which sections need more research, then re-run those agents
- Revert → Restore from git or backup
## Example Enhancement

Before (from /workflows:plan):

```markdown
## Technical Approach

Use React Query for data fetching with optimistic updates.
```

After (from /workflows:deepen-plan):
````markdown
## Technical Approach

Use React Query for data fetching with optimistic updates.

### Research Insights

**Best Practices:**
- Configure `staleTime` and `cacheTime` based on data freshness requirements
- Use `queryKey` factories for consistent cache invalidation
- Implement error boundaries around query-dependent components

**Performance Considerations:**
- Enable `refetchOnWindowFocus: false` for stable data to reduce unnecessary requests
- Use the `select` option to transform and memoize data at the query level
- Consider `placeholderData` for instant perceived loading

**Implementation Details:**
```typescript
// Recommended query configuration
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 5 * 60 * 1000, // 5 minutes
      retry: 2,
      refetchOnWindowFocus: false,
    },
  },
});
```

**Edge Cases:**
- Handle race conditions with `cancelQueries` on component unmount
- Implement retry logic for transient network failures
- Consider offline support with `persistQueryClient`

**References:**
- https://tanstack.com/query/latest/docs/react/guides/optimistic-updates
- https://tkdodo.eu/blog/practical-react-query
````
NEVER CODE! Just research and enhance the plan.