# Save Project Plan Command

You are helping the user create a comprehensive project plan recap that will be saved to `.claude/implementation/NEXT_SESSION.md`. This is a **single file** that gets completely overwritten each session to provide the next AI agent with maximum context.

## Your Task

Automatically analyze the current project state and generate a complete project plan document for the next coding session.

## Analysis Steps

### 1. Gather Git Context

Run these commands to understand recent work:

```bash
# Get the last 10 non-merge commits (hash, subject, relative time)
git log -10 --pretty=format:"%h - %s (%ar)" --no-merges

# Get working tree status (staged/unstaged changes)
git status --short

# Get files changed in the last 5 commits
git diff --name-status HEAD~5..HEAD

# Get the current branch name
git branch --show-current
```

### 2. Read Current Plan

Read `.claude/implementation/NEXT_SESSION.md` to understand:
- What was planned for this session
- What completion % we started at
- What tasks were on the list

### 3. Read Implementation Index

Read `.claude/implementation/00-index.md` to understand:
- Overall project status
- Which phase/week we're in
- What's completed vs. pending

### 4. Analyze File Changes

For the most recently modified files (from git diff), determine:
- What features were implemented
- What tests were added
- What architectural changes were made
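If it helps to script this triage, a minimal sketch follows. It assumes you are running from the repository root; the path patterns (`tests/`, `backend/`, `*.md`) are guesses about this project's layout, so adjust them to whatever `git diff` actually reports.

```bash
# Rough triage of the files changed in the last 5 commits (sketch only).
# The path patterns below are assumptions about the project layout.
git diff --name-status HEAD~5..HEAD | while read -r status path; do
  case "$path" in
    tests/*|*test_*)  kind="tests"  ;;  # new or changed tests
    *.md|docs/*)      kind="docs"   ;;  # documentation updates
    backend/*|*.py)   kind="source" ;;  # implementation changes
    *)                kind="other"  ;;
  esac
  printf "%-7s %-4s %s\n" "$kind" "$status" "$path"
done | sort
```

Use the grouped output only as a starting point; read the interesting files themselves before writing the "What We Just Completed" section.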
## Document Structure

Generate a NEW `NEXT_SESSION.md` file with this exact structure:

````markdown
# Next Session Plan - [Phase/Week Name]

**Current Status**: [Phase] - [X]% Complete
**Last Commit**: `[hash]` - "[commit message]"
**Date**: [YYYY-MM-DD]
**Remaining Work**: [X]% ([N] tasks)

---

## Quick Start for Next AI Agent

### 🎯 Where to Begin

1. Read this entire document first
2. Review files in "Files to Review Before Starting" section
3. Start with Task 1 in "Tasks for Next Session"
4. Run test commands after each task

### 📍 Current Context

[2-3 sentences describing exactly where we are in the project and what the immediate focus is]

---

## What We Just Completed ✅

[Analyze git commits and file changes to document what was done this session]

### 1. [Feature/Component Name] - `path/to/file.py`
- [What changed]
- [Specific implementation details]
- [N] tests passing

### 2. [Next Feature]
- [Details...]

[Continue for all major changes]

---

## Key Architecture Decisions Made

[Extract any important architectural choices, patterns adopted, or technical decisions]

1. **[Decision Name]**: [Rationale and implications]
2. **[Decision Name]**: [Rationale and implications]

---

## Blockers Encountered 🚧

[List any issues that blocked progress or need attention]

- **[Blocker Title]**: [Description, current status, what needs to be done]

[If none, write "None - development proceeded smoothly"]

---

## Outstanding Questions ❓

[List unresolved questions or decisions that need to be made]

1. **[Question]**: [Context and why it matters]
2. **[Question]**: [Context and implications]

[If none, write "None at this time"]

---

## Tasks for Next Session

[Generate specific, actionable tasks with time estimates]

### Task 1: [Task Name] ([Time estimate])

**File(s)**: `path/to/file.py`

**Goal**: [Clear description of what needs to be done]

**Changes**:
1. [Specific change with code example if helpful]
2. [Next specific change]

**Files to Update**:
- `path/to/file1.py` - [What to change]
- `path/to/file2.py` - [What to change]

**Test Command**:
```bash
[Exact command to verify this task]
```

**Acceptance Criteria**:
- [ ] [Specific criterion]
- [ ] [Specific criterion]

---

### Task 2: [Task Name] ([Time estimate])

[Same structure as Task 1]

---

[Continue for all tasks - aim for 3-7 tasks]

---

## Files to Review Before Starting

[List critical files the next AI agent should read with specific line ranges if relevant]

1. `path/to/file.py:45-120` - [Why this is important]
2. `path/to/file2.py` - [Context about this file]
3. `.claude/implementation/[relevant-doc].md` - [What this contains]

---

## Verification Steps

After completing all tasks:

1. **Run all tests**:
   ```bash
   [Specific test commands]
   ```

2. **Manual testing** (if applicable):
   ```bash
   [Terminal client commands or manual steps]
   ```

3. **Database verification** (if applicable):
   ```sql
   [Specific queries to verify state]
   ```

4. **Commit changes**:
   ```bash
   git add [files]
   git commit -m "CLAUDE: [commit message]"
   ```

---

## Success Criteria

[Phase/Week] will be **[X]% complete** when:
- [ ] [Specific criterion with test count if applicable]
- [ ] [Specific criterion]
- [ ] [Specific criterion]
- [ ] Documentation updated
- [ ] Git commit created

---

## Quick Reference

**Current Test Count**: [N] tests ([breakdown by area])
**Last Test Run**: [Status] ([Date])
**Branch**: `[branch-name]`
**Python**: [version]
**Virtual Env**: `backend/venv/`

**Key Imports for Next Session**:
```python
[List the most important imports the next agent will need]
```

**Recent Commit History** (Last 10):
```
[Insert git log output from step 1]
```

---

## Context for AI Agent Resume

**If the next agent needs to understand the bigger picture**:
- Overall project: See `@prd-web-scorecard-1.1.md` and `@CLAUDE.md`
- Architecture: See `@.claude/implementation/00-index.md`
- Current phase details: See `@.claude/implementation/[current-phase].md`

**Critical files in current focus area**:
[List 5-10 most important files for the current work]

**What NOT to do**:
- [Any gotchas or common mistakes to avoid]
- [Patterns we've decided NOT to use and why]

---

**Estimated Time for Next Session**: [X] hours
**Priority**: [High/Medium/Low] ([Why])
**Blocking Other Work**: [Yes/No] ([Details if yes])
**Next Milestone After This**: [What comes next]
````

## Important Guidelines

1. **Be Specific**: Include actual file paths, line numbers, command examples
2. **Be Realistic**: Time estimates should account for testing and debugging
3. **Be Complete**: Future agent should not need to ask "what do I do next?"
4. **Include Context**: Explain WHY things were done, not just WHAT
5. **Flag Unknowns**: If something is unclear, document it in Outstanding Questions
6. **Test Commands**: Every task should have a specific test/verification command
7. **Link to Docs**: Reference other implementation docs with @ syntax when relevant

## After Generating

1. Write the complete document to `.claude/implementation/NEXT_SESSION.md`
2. Show the user a brief summary of:
   - Completion percentage
   - Number of commits analyzed
   - Number of tasks created for next session
   - Any critical blockers or questions
3. Ask if they want to add anything before finalizing

---

**Now begin the analysis and generate the project plan document.**