claude-configs/skills/_archive/orchestrator/SKILL.md

Archived 2026-03-05 (commit 1f70264e73, Cal Corum; co-authored by Claude Opus 4.6):

- create-skill: superseded by native /skill command and official docs
- orchestrator: superseded by native Agent Teams (/agent-teams)
- notediscovery: redundant with cognitive-memory MCP (superior local-first implementation)

All moved to _archive/ for historical reference.

---
name: orchestrator
description: Swarm orchestrator that decomposes tasks, delegates to coder/reviewer/validator agents, and manages the full build cycle. USE WHEN user says "/orchestrator", "use the swarm", "orchestrate this", or provides a PRD/spec for multi-file implementation.
---

# Agent Swarm Orchestrator

Decomposes work into tasks, delegates to specialized subagents (coders, reviewers, validators), and manages the full build cycle.

## Usage

```
/orchestrator <task description or path to PRD file or path to PROJECT_PLAN.json>
```

## CRITICAL CONSTRAINTS

  1. DO NOT use Edit or Write. Delegate ALL implementation to coder subagents via Task.
  2. DO NOT review code yourself. Spawn reviewer subagents for every completed task.
  3. DO NOT validate yourself. Spawn a validator subagent after all reviews pass.
  4. DO NOT use sleep or poll. Task calls block until agents return. No polling needed.
  5. DO NOT use TeamCreate, TeamDelete, or SendMessage. Use plain Task calls only — they block and return results directly.
  6. ALL 6 PHASES ARE MANDATORY. Understand → Decompose → Execute → Review → Validate → Report.
  7. RUN AUTONOMOUSLY. Do NOT stop between phases to ask for confirmation.

## Status Updates

Print `[orchestrator]`-prefixed updates after every action. Never go silent.

## Input Handling

  • PROJECT_PLAN.json → Read and parse. Skip Phase 1+2 (already analyzed). Convert tasks directly into waves using dependencies and priority fields. See "PROJECT_PLAN.json Mode" below.
  • File path (.md, .txt, etc.) → Read tool, then run Phase 1+2 normally
  • Inline description → use directly, run Phase 1+2 normally

## Configurable Parallelism

Default: 3 coders max. User can override: "use 2 coders", "max 5 coders".
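The override phrase can be picked out with a simple pattern match. A minimal sketch — the function name and exact regex are illustrative, not part of the skill:

```python
import re

def parse_coder_limit(text: str, default: int = 3) -> int:
    """Extract a coder-count override like "use 2 coders" or
    "max 5 coders" from the user's request; fall back to the default."""
    m = re.search(r"\b(?:use|max)\s+(\d+)\s+coders?\b", text, re.IGNORECASE)
    return int(m.group(1)) if m else default
```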

## Phase 1: Understand

Print: `[orchestrator] Phase 1: Understanding requirements and exploring codebase...`

  1. Parse input
  2. Explore codebase with Glob, Grep, Read
  3. Identify file contention for wave grouping

Print findings.

## Phase 2: Decompose

Print: `[orchestrator] Phase 2: Decomposing into tasks...`

  1. Create tasks via TaskCreate with descriptions, target files, acceptance criteria
  2. Set blockedBy dependencies
  3. Group into waves — independent tasks within a wave; waves are sequential

Print task table.

## Phase 3: Execute

For each wave:

Print: `[orchestrator] Phase 3: Executing wave N of M...`

Spawn coders as multiple Task tool calls in ONE message. They run in parallel and block until all return:

```
Task(description: "coder-1: DB layer", subagent_type: "general-purpose", model: "sonnet", mode: "bypassPermissions", prompt: "...")
Task(description: "coder-2: CLI", subagent_type: "general-purpose", model: "sonnet", mode: "bypassPermissions", prompt: "...")
```

DO NOT set team_name or run_in_background. Plain Task calls block until the agent finishes and returns results.

Coder prompt template:

```
You are a swarm-coder agent. Read ~/.claude/agents/swarm-coder.md for your full instructions.

Your assigned task: <paste full task description>
Task ID: <id>
Working directory: <path>

When done, mark task completed with TaskUpdate and return a summary of what you did and which files you changed.
```

When all coders return, print results. Then immediately review this wave before starting the next wave.

## Phase 4: Review (per-wave)

Print: `[orchestrator] Phase 4: Reviewing wave N tasks...`

Spawn reviewers as parallel Task calls (one per completed task):

```
Task(description: "reviewer-1: DB layer", subagent_type: "general-purpose", model: "sonnet", mode: "default", prompt: "...")
Task(description: "reviewer-2: CLI", subagent_type: "general-purpose", model: "sonnet", mode: "default", prompt: "...")
```

Reviewer prompt template:

```
You are a swarm-reviewer agent. Read ~/.claude/agents/swarm-reviewer.md for your full instructions.

Review the following completed task:
- Task description: <description>
- Files changed: <list of files>
- Working directory: <path>

Read the changed files and provide your verdict: APPROVE, REQUEST_CHANGES (with specific file:line feedback), or REJECT.
```

Print each verdict. If REQUEST_CHANGES: spawn a coder to apply the fixes (max 2 rounds). After round 2, accept with caveats or flag for human review.

After wave N review completes, proceed to wave N+1. Repeat Phase 3 + 4 for each wave.

## Phase 5: Validate

Print: `[orchestrator] Phase 5: Spawning validator...`

Only after ALL waves and ALL reviews are done:

```
Task(description: "validator: spec check", subagent_type: "general-purpose", model: "sonnet", mode: "default", prompt: "...")
```

Validator prompt template:

```
You are a swarm-validator agent. Read ~/.claude/agents/swarm-validator.md for your full instructions.

Original requirements:
<paste the original spec/PRD or inline description>

Tasks completed:
<list each task with description and files>

Working directory: <path>

Check every requirement. Run tests if a test suite exists. Provide PASS/FAIL per requirement with evidence.
```

Print validation results. If FAIL: spawn a coder to fix or flag for human.

## Phase 6: Report

Print final summary:

  • Tasks completed with status
  • Review verdicts per task
  • Validation results per requirement
  • Caveats or items needing human attention
  • Files created/modified

## Execution Order (mandatory)

For each wave:
  1. Spawn coders (parallel blocking Task calls)
  2. All coders return with results
  3. Spawn reviewers for this wave (parallel blocking Task calls)
  4. All reviewers return with verdicts
  5. Handle REQUEST_CHANGES (max 2 rounds)
  6. Proceed to next wave

After ALL waves:
  7. Spawn validator (blocking Task call)
  8. Validator returns
  9. Handle any FAILs
  10. Print final report

## Wave Discipline

Tasks touching overlapping files MUST be in separate waves. This prevents merge conflicts between parallel coders.
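A sketch of how that check could work mechanically, assuming each task records the file paths it plans to touch (the function and field names are illustrative, not part of the skill):

```python
def split_by_file_overlap(tasks):
    """Greedy sub-wave split: place each task into the first sub-wave
    whose tasks touch none of the same files; else open a new sub-wave."""
    waves = []  # each entry: (tasks_in_subwave, set_of_touched_files)
    for task in tasks:
        files = set(task["files"])
        for members, touched in waves:
            if not (touched & files):
                members.append(task)
                touched.update(files)
                break
        else:
            waves.append(([task], files))
    return [members for members, _ in waves]
```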

## PROJECT_PLAN.json Mode

When the input path ends in PROJECT_PLAN.json, switch to plan-driven mode:

### What Changes

  • Skip Phase 1 (Understand) — the plan already analyzed the codebase
  • Skip Phase 2 (Decompose) — tasks are pre-defined in the JSON
  • Phase 1 becomes: Parse Plan — read the JSON and convert to internal tasks

### How to Parse

  1. Read the JSON file
  2. Filter to incomplete tasks only (`"completed": false`)
  3. Optionally filter by category/priority if the user specifies (e.g., "only critical", "FEAT tasks only")
  4. For each task, create a TaskCreate entry using:
    • subject: the task's `name` field
    • description: combine `description` + `suggestedFix` + `files` + `notes` into a complete coder brief
    • blockedBy: map the `dependencies` array (task IDs like "CRIT-001") to internal task IDs
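Assuming the plan keeps its tasks in a top-level `tasks` array (the exact top-level shape is not specified here), the parse step might look like:

```python
import json
from pathlib import Path

def load_pending_briefs(plan_path):
    """Read PROJECT_PLAN.json and build one coder brief per incomplete task.
    Assumes tasks live under a top-level "tasks" array."""
    plan = json.loads(Path(plan_path).read_text())
    briefs = []
    for t in plan["tasks"]:
        if t.get("completed"):
            continue  # already done; skip
        parts = [t["description"]]
        if t.get("suggestedFix"):
            parts.append("Suggested approach:\n" + t["suggestedFix"])
        if t.get("files"):
            parts.append("Target files: " + ", ".join(f["path"] for f in t["files"]))
        if t.get("notes"):
            parts.append("Notes: " + t["notes"])
        briefs.append({
            "subject": t["name"],
            "description": "\n\n".join(parts),
            "blockedBy": t.get("dependencies", []),
        })
    return briefs
```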

### How to Group into Waves

  1. Build a dependency graph from the `dependencies` fields
  2. Tasks with no dependencies (or whose dependencies are all `"completed": true`) → Wave 1
  3. Tasks depending on Wave 1 tasks → Wave 2
  4. Continue until all tasks are placed
  5. Within each wave, check `files` for overlap — split overlapping tasks into separate sub-waves
  6. Respect the coder parallelism limit (default 3, user-configurable)
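Steps 1–4 amount to a level-order traversal of the dependency graph. A sketch, assuming tasks carry `id` and `dependencies` fields as in the mapping example (the step-5 file-overlap split is a separate pass, not shown):

```python
def group_into_waves(tasks, already_done=()):
    """Level-order grouping: each wave holds every remaining task whose
    dependencies are all satisfied by earlier waves or completed tasks."""
    satisfied = set(already_done)
    remaining = list(tasks)
    waves = []
    while remaining:
        ready = [t for t in remaining
                 if all(d in satisfied for d in t.get("dependencies", []))]
        if not ready:
            raise ValueError("unresolvable dependencies (cycle or missing id)")
        waves.append(ready)
        ready_ids = {t["id"] for t in ready}
        satisfied |= ready_ids
        remaining = [t for t in remaining if t["id"] not in ready_ids]
    return waves
```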

### Task Mapping Example

A PROJECT_PLAN.json task:

```json
{
  "id": "FEAT-001",
  "name": "Add user authentication",
  "description": "Implement JWT-based auth...",
  "dependencies": [],
  "files": [{"path": "src/auth.py"}, {"path": "src/middleware.py"}],
  "suggestedFix": "1. Create auth module\n2. Add middleware...",
  "notes": "Use python-jose for JWT"
}
```

becomes this coder prompt:

```
Your assigned task: Add user authentication

Implement JWT-based auth...

Suggested approach:
1. Create auth module
2. Add middleware...

Target files: src/auth.py, src/middleware.py
Notes: Use python-jose for JWT
```

### After Execution

After the full pipeline (execute → review → validate → report), update the PROJECT_PLAN.json:

  • Set `"completed": true` and `"tested": true` for tasks that passed review + validation
  • Leave `"completed": false` for tasks that failed or were skipped
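A sketch of that update, again assuming a top-level `tasks` array (the function name is illustrative):

```python
import json

def mark_passed(plan_path, passed_ids):
    """Flip completed/tested for tasks that survived review + validation;
    everything else is left untouched."""
    with open(plan_path) as f:
        plan = json.load(f)
    for t in plan["tasks"]:
        if t["id"] in passed_ids:
            t["completed"] = True
            t["tested"] = True
    with open(plan_path, "w") as f:
        json.dump(plan, f, indent=2)
```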

### Partial Execution

The user can request a subset of tasks:

  • `/orchestrator PROJECT_PLAN.json --category critical` → only CRIT-* tasks
  • `/orchestrator PROJECT_PLAN.json --ids FEAT-001,FEAT-002` → specific tasks
  • `/orchestrator PROJECT_PLAN.json --week week1` → tasks in the weeklyRoadmap for week1

Parse these hints from the user's prompt and filter accordingly.
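A sketch of the filtering, assuming `--category critical` maps to the `CRIT-` id prefix as the examples suggest (the `--week`/weeklyRoadmap case is omitted because that structure is not shown here):

```python
def filter_tasks(tasks, category=None, ids=None):
    """Apply --ids or --category hints to a parsed task list.
    The "critical" → "CRIT" prefix mapping is an assumption from the examples."""
    if ids is not None:
        wanted = set(ids)
        return [t for t in tasks if t["id"] in wanted]
    if category is not None:
        prefix = {"critical": "CRIT"}.get(category.lower(), category.upper())
        return [t for t in tasks if t["id"].startswith(prefix)]
    return tasks
```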