CLAUDE: Update project plan for Week 7 continuation
Project Plan Updates:
- Updated NEXT_SESSION.md with comprehensive Week 7 status
- Current status: Phase 3 Week 7 at 17% complete (1 of 6 tasks done)
- Documented Task 1 completion (Strategic Decision Integration)
- Created detailed task breakdown for Tasks 2-7
- Added architectural decision documentation
- Included outstanding questions and verification steps

Session Accomplishments:
- Task 1: Strategic Decision Integration (100% complete)
  - GameState: Added decision phase tracking and AI helper methods
  - StateManager: Implemented asyncio.Future decision queue
  - GameEngine: Added await_defensive/offensive_decision methods
  - AI Opponent: Created stub with default decisions
- Bonus: Manual outcome testing feature for terminal client
- Planning: Complete Week 7 plan document (25 pages)

Next Session Tasks:
1. Task 2: Decision Validators (3-4h)
2. Task 3: Result Charts Part A (4-5h)
3. Task 4: Result Charts Part B (4-5h)
4. Task 5: Double Play Mechanics (2-3h)
5. Task 6: WebSocket Handlers (3-4h)
6. Task 7: Terminal Client Enhancement (2-3h)

Slash Commands:
- Added save-project-plan.md command definition

Test Status: 200/201 passing
Target: 285+ after Week 7 complete

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
parent 95d8703f56
commit 0a21edad5c

.claude/commands/save-project-plan.md (new file, 270 lines)
@@ -0,0 +1,270 @@
# Save Project Plan Command

You are helping the user create a comprehensive project plan recap that will be saved to `.claude/implementation/NEXT_SESSION.md`. This is a **single file** that gets completely overwritten each session to provide the next AI agent with maximum context.

## Your Task

Automatically analyze the current project state and generate a complete project plan document for the next coding session.

## Analysis Steps

### 1. Gather Git Context

Run these commands to understand recent work:

```bash
# Get last 10 commits with detailed info
git log -10 --pretty=format:"%h - %s (%ar)" --no-merges

# Get current branch and status
git status --short

# Get files changed in last 5 commits
git diff --name-status HEAD~5..HEAD

# Get current branch name
git branch --show-current
```
### 2. Read Current Plan

Read `.claude/implementation/NEXT_SESSION.md` to understand:

- What was planned for this session
- What completion % we started at
- What tasks were on the list

### 3. Read Implementation Index

Read `.claude/implementation/00-index.md` to understand:

- Overall project status
- Which phase/week we're in
- What's completed vs. pending

### 4. Analyze File Changes

For the most recently modified files (from git diff), determine:

- What features were implemented
- What tests were added
- What architectural changes were made
## Document Structure

Generate a NEW `NEXT_SESSION.md` file with this exact structure:

```markdown
# Next Session Plan - [Phase/Week Name]

**Current Status**: [Phase] - [X]% Complete
**Last Commit**: `[hash]` - "[commit message]"
**Date**: [YYYY-MM-DD]
**Remaining Work**: [X]% ([N] tasks)

---

## Quick Start for Next AI Agent

### 🎯 Where to Begin
1. Read this entire document first
2. Review files in "Files to Review Before Starting" section
3. Start with Task 1 in "Tasks for Next Session"
4. Run test commands after each task

### 📍 Current Context
[2-3 sentences describing exactly where we are in the project and what the immediate focus is]

---

## What We Just Completed ✅

[Analyze git commits and file changes to document what was done this session]

### 1. [Feature/Component Name]
- `path/to/file.py` - [What changed]
- [Specific implementation details]
- [N] tests passing

### 2. [Next Feature]
- [Details...]

[Continue for all major changes]

---

## Key Architecture Decisions Made

[Extract any important architectural choices, patterns adopted, or technical decisions]

1. **[Decision Name]**: [Rationale and implications]
2. **[Decision Name]**: [Rationale and implications]

---

## Blockers Encountered 🚧

[List any issues that blocked progress or need attention]

- **[Blocker Title]**: [Description, current status, what needs to be done]

[If none, write "None - development proceeded smoothly"]

---

## Outstanding Questions ❓

[List unresolved questions or decisions that need to be made]

1. **[Question]**: [Context and why it matters]
2. **[Question]**: [Context and implications]

[If none, write "None at this time"]

---

## Tasks for Next Session

[Generate specific, actionable tasks with time estimates]

### Task 1: [Task Name] ([Time estimate])

**File(s)**: `path/to/file.py`

**Goal**: [Clear description of what needs to be done]

**Changes**:
1. [Specific change with code example if helpful]
2. [Next specific change]

**Files to Update**:
- `path/to/file1.py` - [What to change]
- `path/to/file2.py` - [What to change]

**Test Command**:
```bash
[Exact command to verify this task]
```

**Acceptance Criteria**:
- [ ] [Specific criterion]
- [ ] [Specific criterion]

---

### Task 2: [Task Name] ([Time estimate])

[Same structure as Task 1]

---

[Continue for all tasks - aim for 3-7 tasks]

---

## Files to Review Before Starting

[List critical files the next AI agent should read with specific line ranges if relevant]

1. `path/to/file.py:45-120` - [Why this is important]
2. `path/to/file2.py` - [Context about this file]
3. `.claude/implementation/[relevant-doc].md` - [What this contains]

---

## Verification Steps

After completing all tasks:

1. **Run all tests**:
```bash
[Specific test commands]
```

2. **Manual testing** (if applicable):
```bash
[Terminal client commands or manual steps]
```

3. **Database verification** (if applicable):
```sql
[Specific queries to verify state]
```

4. **Commit changes**:
```bash
git add [files]
git commit -m "CLAUDE: [commit message]"
```

---

## Success Criteria

[Phase/Week] will be **[X]% complete** when:

- [ ] [Specific criterion with test count if applicable]
- [ ] [Specific criterion]
- [ ] [Specific criterion]
- [ ] Documentation updated
- [ ] Git commit created

---

## Quick Reference

**Current Test Count**: [N] tests ([breakdown by area])
**Last Test Run**: [Status] ([Date])
**Branch**: `[branch-name]`
**Python**: [version]
**Virtual Env**: `backend/venv/`

**Key Imports for Next Session**:
```python
[List the most important imports the next agent will need]
```

**Recent Commit History** (Last 10):
```
[Insert git log output from step 1]
```

---

## Context for AI Agent Resume

**If the next agent needs to understand the bigger picture**:
- Overall project: See `@prd-web-scorecard-1.1.md` and `@CLAUDE.md`
- Architecture: See `@.claude/implementation/00-index.md`
- Current phase details: See `@.claude/implementation/[current-phase].md`

**Critical files in current focus area**:
[List 5-10 most important files for the current work]

**What NOT to do**:
- [Any gotchas or common mistakes to avoid]
- [Patterns we've decided NOT to use and why]

---

**Estimated Time for Next Session**: [X] hours
**Priority**: [High/Medium/Low] ([Why])
**Blocking Other Work**: [Yes/No] ([Details if yes])
**Next Milestone After This**: [What comes next]
```
## Important Guidelines

1. **Be Specific**: Include actual file paths, line numbers, command examples
2. **Be Realistic**: Time estimates should account for testing and debugging
3. **Be Complete**: Future agent should not need to ask "what do I do next?"
4. **Include Context**: Explain WHY things were done, not just WHAT
5. **Flag Unknowns**: If something is unclear, document it in Outstanding Questions
6. **Test Commands**: Every task should have a specific test/verification command
7. **Link to Docs**: Reference other implementation docs with @ syntax when relevant

## After Generating

1. Write the complete document to `.claude/implementation/NEXT_SESSION.md`
2. Show the user a brief summary of:
   - Completion percentage
   - Number of commits analyzed
   - Number of tasks created for next session
   - Any critical blockers or questions
3. Ask if they want to add anything before finalizing

---

**Now begin the analysis and generate the project plan document.**
@@ -1,137 +1,152 @@
-# Next Session Plan - Phase 3 Gateway
+# Next Session Plan - Phase 3 Week 7 in Progress

-**Current Status**: Phase 2 - Week 6 Complete (100%)
+**Current Status**: Phase 3 - Week 7 (~17% Complete)
-**Last Commit**: Not yet committed - "CLAUDE: Complete Week 6 - granular PlayOutcome integration and metadata support"
+**Last Commit**: `95d8703` - "CLAUDE: Implement Week 7 Task 1 - Strategic Decision Integration"
 **Date**: 2025-10-29
-**Next Milestone**: Phase 3 - Complete Game Features
+**Remaining Work**: 83% (5 of 6 tasks remaining)

 ---

 ## Quick Start for Next AI Agent

 ### 🎯 Where to Begin
-1. **First**: Commit the completed Week 6 work (see Verification Steps below)
-2. **Then**: Read Phase 3 plan at `@.claude/implementation/03-gameplay-features.md`
-3. **Review**: Test with terminal client to verify all systems working
-4. **Start**: Phase 3 planning - strategic decisions and full result charts
+1. Read this entire document first
+2. Review `@.claude/implementation/WEEK_7_PLAN.md` for comprehensive task details
+3. Start with **Task 2: Decision Validators** (next logical step)
+4. Run tests after each change: `pytest tests/unit/core/test_validators.py -v`

 ### 📍 Current Context
-Week 6 is **100% complete**! We've successfully integrated the granular PlayOutcome enum system throughout the codebase, updated the dice system with the chaos_d20 naming, and added metadata support for uncapped hits. The game engine now has a solid foundation for Phase 3's advanced features including full card-based resolution, strategic decisions, and uncapped hit decision trees.
+**Week 7 Task 1 is complete!** We've implemented the async decision workflow infrastructure that integrates AI and human decision-making. The GameEngine now has `await_defensive_decision()` and `await_offensive_decision()` methods that use asyncio Futures for WebSocket communication. AI teams get instant decisions, human teams wait with timeout.
+
+**Next up**: Validate those decisions! Task 2 adds `validate_defensive_decision()` and `validate_offensive_decision()` to the validators module to ensure decisions are legal for the current game state.
 ---

 ## What We Just Completed ✅

-### 1. Dice System Update - chaos_d20 Rename
-- `app/core/roll_types.py` - Renamed `check_d20` → `chaos_d20` in AbRoll dataclass
-- `app/core/dice.py` - Updated DiceSystem.roll_ab() to use chaos_d20
-- `app/core/play_resolver.py` - Updated SimplifiedResultChart to reference chaos_d20
-- Updated docstrings: "chaos die (1=WP check, 2=PB check, 3+=normal)"
-- Cleaned up string output: Only displays resolution_d20, not chaos_d20
-- **Tests**: 34/35 dice tests passing (1 pre-existing timing issue)
-- **Tests**: 27/27 roll_types tests passing
-
-### 2. PlayOutcome Enum - Granular Variants
-- `app/config/result_charts.py` - Expanded PlayOutcome with granular variants:
-  - **Groundballs**: GROUNDBALL_A (DP opportunity), GROUNDBALL_B, GROUNDBALL_C
-  - **Flyouts**: FLYOUT_A, FLYOUT_B, FLYOUT_C
-  - **Singles**: SINGLE_1 (standard), SINGLE_2 (enhanced), SINGLE_UNCAPPED
-  - **Doubles**: DOUBLE_2 (to 2nd), DOUBLE_3 (to 3rd), DOUBLE_UNCAPPED
-- Removed old enums: GROUNDOUT, FLYOUT, DOUBLE_PLAY, SINGLE, DOUBLE
-- Updated helper methods: `is_hit()`, `is_out()`, `is_extra_base_hit()`, `get_bases_advanced()`
-- **Tests**: 30/30 config tests passing
+### 1. Strategic Decision Integration (Week 7 Task 1) - 100%
+- **GameState Model** (`app/models/game_models.py`)
+  - Added `pending_defensive_decision` and `pending_offensive_decision` fields
+  - Added `decision_phase` tracking (idle, awaiting_defensive, awaiting_offensive, resolving, completed)
+  - Added `decision_deadline` field (ISO8601 timestamp) for timeout handling
+  - Added `is_batting_team_ai()` and `is_fielding_team_ai()` helper methods
+  - Added validator for `decision_phase` field
+  - **Impact**: State can now track the full decision workflow lifecycle
+
+- **StateManager** (`app/core/state_manager.py`)
+  - Added `_pending_decisions` dict for asyncio.Future-based decision queue
+  - Implemented `set_pending_decision()` - Creates futures for pending decisions
+  - Implemented `await_decision()` - Waits for decision submission (blocks until resolved)
+  - Implemented `submit_decision()` - Resolves pending futures when decisions submitted
+  - Implemented `cancel_pending_decision()` - Cleanup on timeout/abort
+  - **Impact**: Enables async decision workflow with WebSocket integration ready
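The Future-based decision queue described in the StateManager bullets can be sketched as follows. This is a minimal stand-alone illustration, not the project's actual class: the method names match the bullets above, but the class name, signatures, and decision payload shape are assumptions.

```python
import asyncio


class DecisionQueue:
    """Minimal sketch of a Future-based pending-decision queue (illustrative only)."""

    def __init__(self) -> None:
        self._pending_decisions: dict = {}

    def set_pending_decision(self, game_id: str) -> None:
        # Create a Future the engine will await until a decision arrives
        self._pending_decisions[game_id] = asyncio.get_running_loop().create_future()

    async def await_decision(self, game_id: str) -> dict:
        # Blocks until submit_decision() resolves the Future
        return await self._pending_decisions[game_id]

    def submit_decision(self, game_id: str, decision: dict) -> None:
        # Called by a WebSocket handler (or terminal client) when the player responds
        fut = self._pending_decisions.pop(game_id, None)
        if fut is not None and not fut.done():
            fut.set_result(decision)

    def cancel_pending_decision(self, game_id: str) -> None:
        # Cleanup on timeout or game abort
        fut = self._pending_decisions.pop(game_id, None)
        if fut is not None and not fut.done():
            fut.cancel()


async def demo() -> dict:
    queue = DecisionQueue()
    queue.set_pending_decision("game-1")
    # Simulate a player responding shortly after the request
    asyncio.get_running_loop().call_later(
        0.01, queue.submit_decision, "game-1", {"positioning": "normal"}
    )
    return await queue.await_decision("game-1")
```

The key property is that `await_decision()` suspends without polling; whichever handler calls `submit_decision()` wakes the waiting engine coroutine.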
-### 3. PlayResolver Integration
-- `app/core/play_resolver.py` - Removed local PlayOutcome enum, imported from app.config
-- Updated SimplifiedResultChart.get_outcome() with new roll distribution:
-  - Rolls 6-8: GROUNDBALL_A/B/C
-  - Rolls 9-11: FLYOUT_A/B/C
-  - Rolls 12-13: WALK
-  - Rolls 14-15: SINGLE_1/2
-  - Rolls 16-17: DOUBLE_2/3
-  - Roll 18: LINEOUT
-  - Roll 19: TRIPLE
-  - Roll 20: HOMERUN
-- Added handlers for all new outcome variants in `_resolve_outcome()`
-  - SINGLE_UNCAPPED → treated as SINGLE_1 (TODO Phase 3: decision tree)
-  - DOUBLE_UNCAPPED → treated as DOUBLE_2 (TODO Phase 3: decision tree)
-  - GROUNDBALL_A → includes TODO for double play logic (Phase 3)
-- **Tests**: 19/19 play resolver tests passing
+- **GameEngine** (`app/core/game_engine.py`)
+  - Added `DECISION_TIMEOUT` constant (30 seconds)
+  - Implemented `await_defensive_decision()` with AI/human branching and timeout
+  - Implemented `await_offensive_decision()` with AI/human branching and timeout
+  - Enhanced `submit_defensive_decision()` to resolve pending futures
+  - Enhanced `submit_offensive_decision()` to resolve pending futures
+  - **Impact**: Fully async decision workflow, backward compatible with terminal client
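The AI/human branching with a timeout fallback described for the GameEngine follows a pattern like this. This is an illustrative sketch, not the project's actual method: `DECISION_TIMEOUT` comes from the bullet above, while the function signature and the default decision payload are assumptions.

```python
import asyncio

DECISION_TIMEOUT = 30  # seconds, per the GameEngine constant above


async def await_defensive_decision(
    is_ai: bool,
    pending: "asyncio.Future | None",
    timeout: float = DECISION_TIMEOUT,
) -> dict:
    """Return the fielding team's decision, falling back to a default on timeout."""
    if is_ai:
        # AI teams answer instantly; no waiting (stub default for now)
        return {"positioning": "normal"}
    try:
        # Human teams: block until the WebSocket handler resolves the future
        return await asyncio.wait_for(pending, timeout=timeout)
    except asyncio.TimeoutError:
        # Apply the default decision so the game never stalls
        return {"positioning": "normal"}
```

Note that `asyncio.wait_for()` cancels the pending future when the timeout fires, which matches the `cancel_pending_decision()` cleanup described above.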
-### 4. Play.metadata Support
-- `app/core/game_engine.py` - Added metadata tracking in `_save_play_to_db()`:
-  ```python
-  play_metadata = {}
-  if result.outcome in [PlayOutcome.SINGLE_UNCAPPED, PlayOutcome.DOUBLE_UNCAPPED]:
-      play_metadata["uncapped"] = True
-      play_metadata["outcome_type"] = result.outcome.value
-  play_data["play_metadata"] = play_metadata
-  ```
-- Verified Play model already has `play_metadata` field (JSON, default=dict)
-- Ready for Phase 3 runner advancement decision tracking
-- **Tests**: All core tests passing
-
-### 5. Config Import Compatibility
-- `app/config/__init__.py` - Added backward compatibility for Settings/get_settings
-- Allows imports from both `app.config` package and `app.config.py` module
-- No breaking changes to existing code
+- **AI Opponent Stub** (`app/core/ai_opponent.py`)
+  - Created new module with `AIOpponent` class
+  - `generate_defensive_decision()` - Returns default "normal" positioning (stub)
+  - `generate_offensive_decision()` - Returns default "normal" approach (stub)
+  - TODO markers for Week 9 full AI implementation
+  - **Impact**: AI teams can play immediately, full logic deferred to Week 9
+
+- **Testing**: Config tests 58/58 passing ✅, Terminal client works perfectly ✅
+
+### 2. Manual Outcome Testing Feature (Bonus)
+- **Terminal Client Commands** (`terminal_client/commands.py`, `terminal_client/repl.py`)
+  - Added `list_outcomes` command - Displays categorized table of all 30+ PlayOutcome values
+  - Added `resolve_with <outcome>` command - Force specific outcome for testing
+  - TAB completion for all outcome names
+  - Full help documentation with examples
+  - **Impact**: Can test specific scenarios without random dice rolls
+
+- **Documentation** (`.claude/implementation/MANUAL_OUTCOME_TESTING.md`)
+  - Complete usage guide
+  - Testing use cases (runner advancement, DP mechanics, scoring)
+  - Implementation details
+  - Future enhancement plan for Week 7 integration
+
+### 3. Phase 3 Planning Complete
+- **Week 7 Plan** (`.claude/implementation/WEEK_7_PLAN.md`)
+  - 25-page comprehensive plan with 6 major tasks
+  - Code examples and acceptance criteria for each task
+  - Testing strategy (85+ new tests planned)
+  - **Impact**: Clear roadmap for entire Week 7
+
+- **Implementation Index Updated** (`.claude/implementation/00-index.md`)
+  - Marked Week 6 as 100% complete
+  - Updated status table with granular completion tracking
+  - Set current phase to "Phase 3 - Complete Game Features (Planning → In Progress)"
 ---

 ## Key Architecture Decisions Made

-### 1. **Granular Outcome Variants**
-**Decision**: Break SINGLE into SINGLE_1/2/UNCAPPED, DOUBLE into DOUBLE_2/3/UNCAPPED, etc.
+### 1. **Async Decision Workflow with Futures**
+**Decision**: Use `asyncio.Future` for decision queue instead of simple flag polling

 **Rationale**:
-- Different advancement rules per variant (e.g., DOUBLE_3 = batter to 3rd instead of 2nd)
-- Groundball variants for future DP logic (GROUNDBALL_A checks for DP opportunity)
-- Flyout variants for different trajectories/depths
-- Uncapped variants clearly distinguished for metadata tracking
+- True async/await pattern - GameEngine blocks until decision arrives
+- Timeout naturally handled with `asyncio.wait_for()`
+- Clean separation between decision request and resolution
+- Works seamlessly with WebSocket event handlers

 **Impact**:
-- More detailed outcome tracking in database
-- Easier to implement Phase 3 advancement decisions
-- Play-by-play logs more descriptive
+- GameEngine can await human decisions without polling
+- WebSocket handlers simply call `state_manager.submit_decision()`
+- Timeout automatically applies default decision (no game blocking)
+- Fully backward compatible with terminal client

-### 2. **Chaos Die Naming**
-**Decision**: Rename `check_d20` → `chaos_d20`
+### 2. **AI/Human Decision Branching**
+**Decision**: Check AI status in `await_*_decision()` methods, not in submit handlers

 **Rationale**:
-- More descriptive of its purpose (5% chance chaos events)
-- Distinguishes from resolution_d20 (split resolution)
-- Clearer intent in code and logs
+- AI teams get instant decisions (no waiting)
+- Human teams use WebSocket workflow
+- Same code path for both after decision obtained
+- AI logic centralized in `ai_opponent` module

 **Impact**:
-- More understandable codebase
-- Better self-documenting code
-- Clearer for future contributors
+- AI games play at full speed
+- Human games have proper timeout handling
+- Easy to swap AI difficulty levels
+- Clear separation of concerns

-### 3. **Metadata for Uncapped Hits**
-**Decision**: Use `play_metadata` JSON field for uncapped hit tracking
+### 3. **Decision Phase State Machine**
+**Decision**: Add explicit `decision_phase` field to GameState

 **Rationale**:
-- Flexible schema for Phase 3 decision tree data
-- No database migrations needed
-- Easy to query and filter uncapped plays
+- Frontend needs to know what's expected (awaiting defensive, awaiting offensive, resolving)
+- Can display appropriate UI based on phase
+- Easy to debug decision workflow
+- Timestamp deadline for timeout UI

 **Impact**:
-- Ready for Phase 3 runner advancement tracking
-- Can store complex decision data without schema changes
-- Analytics-friendly structure
+- Frontend can show countdown timers
+- Clear state transitions for WebSocket events
+- Better user experience (know what's happening)
+- Easier to implement async game mode
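The `decision_phase` state machine from Decision 3 can be illustrated with a small transition table. The phase names come from the GameState bullets earlier; the specific transition edges are an assumption about the intended flow, sketched here only to show the shape of the validation.

```python
# Allowed decision_phase transitions (hypothetical sketch of the intended flow)
ALLOWED_TRANSITIONS = {
    "idle": {"awaiting_defensive"},
    "awaiting_defensive": {"awaiting_offensive", "idle"},  # back to idle on cancel/abort
    "awaiting_offensive": {"resolving", "idle"},
    "resolving": {"completed"},
    "completed": {"idle"},  # ready for the next at-bat
}


def transition(current: str, nxt: str) -> str:
    """Validate and perform a decision_phase transition, raising on illegal moves."""
    if nxt not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal decision_phase transition: {current} -> {nxt}")
    return nxt
```

A field validator on GameState could enforce membership in these phase names, while a table like this (if adopted) would catch out-of-order WebSocket events.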
-### 4. **Simplified for MVP, TODOs for Phase 3**
-**Decision**: Treat uncapped hits as standard hits for now, add TODOs for full implementation
+### 4. **Stub AI for Week 7, Full AI for Week 9**
+**Decision**: Return sensible defaults now, defer complex AI logic

 **Rationale**:
-- Completes Week 6 foundation without blocking
-- Clear markers for Phase 3 work
-- Test coverage proves integration works
+- Week 7 focus is decision infrastructure, not AI strategy
+- Default decisions allow testing without blocking
+- AI logic is complex enough to deserve dedicated time
+- Can test human vs human immediately

 **Impact**:
-- Can proceed to Phase 3 immediately
-- No broken functionality
-- Clear roadmap for next steps
+- Week 7 can proceed without AI complexity
+- AI vs AI games work (just use defaults)
+- Clear TODO markers for Week 9 work
+- Testing not blocked on AI implementation

 ---

@@ -143,246 +158,344 @@ Week 6 is **100% complete**! We've successfully integrated the granular PlayOutc
 ## Outstanding Questions ❓

-### 1. **Phase 3 Priority Order**
-**Question**: Should we implement full card-based resolution first, or strategic decisions first?
-
-**Context**: Both are major Phase 3 components. Card resolution unlocks PD league auto-resolve. Strategic decisions unlock defensive shifts and offensive tactics.
-
-**Recommendation**: Start with strategic decisions (simpler), then card resolution (more complex).
+### 1. **Default Decision on Timeout**
+**Question**: Should default decision be logged/notified to user?
+
+**Context**: When human player times out, we apply default decision (all "normal" values). Should we:
+- A) Silently apply defaults (current implementation)
+- B) Log and broadcast to game room that default was used
+- C) Show warning message to timed-out player
+
+**Recommendation**: Option B - broadcast to game room for transparency, but don't block game flow.

-### 2. **Double Play Logic Complexity**
-**Question**: How complex should the DP logic be for GROUNDBALL_A?
-
-**Context**: Real Strat-O-Matic has situational DP chances based on speed, position, outs. Do we implement full complexity or simplified version for MVP?
-
-**Recommendation**: Simplified for MVP (basic speed check), full complexity post-MVP.
+### 2. **WebSocket Event Names**
+**Question**: What should we call the decision notification events?
+
+**Context**: Need event names for:
+- Requesting decision from player
+- Decision submitted successfully
+- Decision timeout/default applied
+
+**Proposed Names**:
+- `decision_required` - Notify player their decision is needed
+- `decision_accepted` - Confirm decision received
+- `decision_submitted` - Broadcast to room that decision was made
+- `decision_timeout` - Notify room that default was applied
+
+**Recommendation**: Use proposed names, add to `websocket/events.py` in Task 4.

-### 3. **Terminal Client Enhancement**
-**Question**: Should we add a "quick game" mode to terminal client for faster testing?
-
-**Context**: Currently need to manually select decisions for each play. Auto-resolve mode would speed up testing.
-
-**Recommendation**: Add in Phase 3 alongside AI opponent work.
+### 3. **Validation Order**
+**Question**: Validate decisions before or after storing in state?
+
+**Context**: Current flow stores decision in state, then validates. Alternative is validate first, then store.
+
+**Recommendation**: Keep current order (store then validate) for debugging - can see what invalid decisions were attempted.
 ---

 ## Tasks for Next Session

-### Task 1: Commit Week 6 Completion (10 min)
-
-**Goal**: Create git commit for completed Week 6 work
-
-**Acceptance Criteria**:
-- [ ] All tests passing (139/140 - 1 pre-existing timing test)
-- [ ] Git status clean except expected changes
-- [ ] Commit message follows convention
-- [ ] Pushed to implement-phase-2 branch
-
-**Commands**:
-```bash
-# Verify tests
-python -m pytest tests/unit/config/ tests/unit/core/ -v
-
-# Stage changes
-git add backend/app/config/ backend/app/core/ backend/tests/
-
-# Commit
-git commit -m "CLAUDE: Complete Week 6 - granular PlayOutcome integration and metadata support
-
-- Renamed check_d20 → chaos_d20 throughout dice system
-- Expanded PlayOutcome enum with granular variants (SINGLE_1/2, DOUBLE_2/3, GROUNDBALL_A/B/C, etc.)
-- Integrated PlayOutcome from app.config into PlayResolver
-- Added play_metadata support for uncapped hit tracking
-- Updated all tests (139/140 passing)
-
-Week 6: 100% Complete - Ready for Phase 3"
-
-# Optional: Push to remote
-git push origin implement-phase-2
-```
-
----
-
-### Task 2: Update Implementation Index (15 min)
-
-**File**: `.claude/implementation/00-index.md`
-
-**Goal**: Mark Week 6 complete, update status table
-
-**Changes**:
-1. Update status table:
-   - PlayResolver Integration: ✅ Complete
-   - PlayOutcome Enum: ✅ Complete
-   - Dice System: ✅ Complete
-2. Update "Decisions Made" section with Week 6 completion date
-3. Update "Last Updated" footer
-4. Update "Current Work" to "Phase 3 Planning"
-
-**Acceptance Criteria**:
-- [ ] Status table accurate
-- [ ] Week 6 marked 100% complete
-- [ ] Next phase clearly indicated
-
----
-
-### Task 3: Review Phase 3 Scope (30 min)
-
-**Files to Read**:
-- `@.claude/implementation/03-gameplay-features.md`
-- `@prd-web-scorecard-1.1.md` (sections on strategic decisions)
-
-**Goal**: Understand Phase 3 requirements and create task breakdown
-
-**Tasks**:
-1. Read Phase 3 documentation
-2. List all strategic decision types needed:
-   - Defensive: Alignment, depth, hold runners, shifts
-   - Offensive: Steal attempts, bunts, hit-and-run
-3. Identify card resolution requirements:
-   - PD: Parse batting/pitching ratings into result charts
-   - SBA: Manual entry validation
-4. Map uncapped hit decision tree requirements
-5. Create initial Phase 3 task list
-
-**Output**: Create `PHASE_3_PLAN.md` with task breakdown
-
----
-
-### Task 4: Terminal Client Smoke Test (20 min)
-
-**Goal**: Verify all Week 6 changes work end-to-end
-
-**Test Procedure**:
-```bash
-cd backend
-source venv/bin/activate
-python -m terminal_client
-
-# In REPL:
-> new_game
-> defensive normal
-> offensive normal
-> resolve
-
-# Repeat 20 times to see different outcomes
-> quick_play 20
-
-# Check for:
-# - SINGLE_1, SINGLE_2, DOUBLE_2, DOUBLE_3 outcomes
-# - GROUNDBALL_A/B/C, FLYOUT_A/B/C outcomes
-# - Clean play descriptions
-# - No errors
-
-> status
-> quit
-```
-
-**Acceptance Criteria**:
-- [ ] All outcome types appear in play results
-- [ ] No crashes or errors
-- [ ] Play descriptions accurate
-- [ ] Scores update correctly
+### Task 2: Decision Validators (3-4 hours)
+
+**File**: `backend/app/core/validators.py`
+
+**Goal**: Add strategic decision validation to ensure decisions are legal for current game state
+
+**Changes**:
+1. Add `validate_defensive_decision(state, decision)` method
+2. Add `validate_offensive_decision(state, decision)` method
+3. Validate hold runners exist on bases
+4. Validate steal attempts have runners
+5. Validate bunt/hit-and-run situational requirements
+6. Clear error messages for each validation failure
+
+**Files to Update**:
+- `app/core/validators.py` - Add two new validation methods
+- `tests/unit/core/test_validators.py` - Add ~20 test cases
+
+**Test Command**:
+```bash
+pytest tests/unit/core/test_validators.py -v
+```
+
+**Acceptance Criteria**:
+- [ ] `validate_defensive_decision()` checks all hold runner validations
+- [ ] `validate_defensive_decision()` checks infield depth vs outs (DP with 2 outs = error)
+- [ ] `validate_offensive_decision()` checks steal attempts have runners on base
+- [ ] `validate_offensive_decision()` checks bunt not attempted with 2 outs
+- [ ] `validate_offensive_decision()` checks hit-and-run has at least one runner
+- [ ] All validators raise `ValidationError` with clear messages
+- [ ] 20+ test cases covering all edge cases
+- [ ] GameEngine integration works (validators called before resolution)
---
|
---
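The validation rules above can be sketched as a standalone function. Everything here is illustrative: `ValidationError`, the `OffensiveDecision` fields, and the signature are assumptions, not the actual definitions in `app/core/validators.py` or `app/models/game_models.py`.

```python
# Hypothetical sketch -- real names live in app/core/validators.py.
from dataclasses import dataclass, field


class ValidationError(Exception):
    """Raised when a strategic decision is illegal for the current state."""


@dataclass
class OffensiveDecision:
    approach: str = "normal"            # normal | contact | power | bunt
    steal_targets: list[int] = field(default_factory=list)  # destination bases
    hit_and_run: bool = False


def validate_offensive_decision(runners_on: set[int], outs: int,
                                decision: OffensiveDecision) -> None:
    """Raise ValidationError with a clear message; return None when legal."""
    for target in decision.steal_targets:
        # Stealing base N requires a runner on base N-1.
        if (target - 1) not in runners_on:
            raise ValidationError(
                f"Cannot steal {target}: no runner on base {target - 1}")
    if decision.approach == "bunt" and outs >= 2:
        raise ValidationError("Cannot bunt with 2 outs")
    if decision.hit_and_run and not runners_on:
        raise ValidationError("Hit-and-run requires at least one runner on base")
```

The real validator will presumably take the full `GameState` rather than bare `runners_on`/`outs` arguments; the point is that each rule maps to one guard clause with its own message.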
|
||||||
|
|
||||||
### Task 5: Database Metadata Verification (Optional, 15 min)
|
### Task 3: Complete Result Charts - Part A (4-5 hours)
|
||||||
|
|
||||||
**Goal**: Verify play_metadata saves correctly for uncapped hits
|
**File**: `backend/app/config/result_charts.py`
|
||||||
|
|
||||||
**Prerequisites**: Modify SimplifiedResultChart to occasionally return SINGLE_UNCAPPED/DOUBLE_UNCAPPED for testing
|
**Goal**: Implement `StandardResultChart` with defensive/offensive decision modifiers
|
||||||
|
|
||||||
**Test Procedure**:
|
**Changes**:
|
||||||
|
1. Create abstract `ResultChart` base class
|
||||||
|
2. Implement `StandardResultChart.get_outcome()` with decision modifiers
|
||||||
|
3. Implement hit location logic (pull/center/opposite distribution)
|
||||||
|
4. Add defensive modifier methods (shifts, depths affect outcomes)
|
||||||
|
5. Add offensive modifier methods (approaches affect outcomes)
|
||||||
|
|
||||||
|
**Files to Update**:
|
||||||
|
- `app/config/result_charts.py` - Replace `SimplifiedResultChart`
|
||||||
|
- `tests/unit/config/test_result_charts.py` - Add ~25 test cases
|
||||||
|
|
||||||
|
**Test Command**:
|
||||||
```bash
|
```bash
|
||||||
# Add temporary test code to force uncapped outcome
|
pytest tests/unit/config/test_result_charts.py -v
|
||||||
# In play_resolver.py SimplifiedResultChart.get_outcome():
|
|
||||||
# if roll == 14: return PlayOutcome.SINGLE_UNCAPPED
|
|
||||||
|
|
||||||
# Run terminal client, get a few plays
|
|
||||||
python -m terminal_client
|
|
||||||
> new_game
|
|
||||||
> quick_play 50
|
|
||||||
|
|
||||||
# Check database
|
|
||||||
psql postgresql://paperdynasty:PASSWORD@10.10.0.42:5432/paperdynasty_dev
|
|
||||||
SELECT id, hit_type, play_metadata FROM plays WHERE play_metadata != '{}';
|
|
||||||
|
|
||||||
# Expected: Rows with play_metadata = {"uncapped": true, "outcome_type": "single_uncapped"}
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**Acceptance Criteria**:
|
**Acceptance Criteria**:
|
||||||
- [ ] play_metadata saves non-empty for uncapped hits
|
- [ ] `ResultChart` abstract base class with `get_outcome()` method
|
||||||
- [ ] JSON structure correct
|
- [ ] `StandardResultChart` uses defensive decision to modify outcomes
|
||||||
- [ ] outcome_type matches hit_type
|
- [ ] Infield in: GROUNDBALL_B → better chance to get lead runner
|
||||||
|
- [ ] Infield back: GROUNDBALL_A → sacrifice DP opportunity
|
||||||
|
- [ ] Shift: Affects hit location distribution
|
||||||
|
- [ ] Contact approach: Fewer strikeouts, more weak contact
|
||||||
|
- [ ] Power approach: More strikeouts, more extra-base hits
|
||||||
|
- [ ] Bunt: Converts to bunt outcomes
|
||||||
|
- [ ] 25+ test cases for all modifiers
|
||||||
|
- [ ] Hit location logic with handedness (RHB pulls left, LHB pulls right)
|
||||||
|
|
||||||
---
|
---
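One way to picture the modifier mechanism: the decision shifts the roll (or the outcome weights) before the table lookup. The table below is invented for illustration; the real `StandardResultChart` thresholds, outcome mix, and modifier values will differ.

```python
# Invented d20-style outcome table: (upper threshold, outcome).
BASE_OUTCOMES = [
    (5, "STRIKEOUT"),
    (9, "GROUNDBALL_B"),
    (13, "FLYOUT_B"),
    (17, "SINGLE_1"),
    (20, "DOUBLE_2"),
]


def get_outcome(roll: int, approach: str = "normal") -> str:
    """Map a 1-20 roll to an outcome, shifting the roll by offensive approach."""
    # Illustrative modifiers only: power pushes rolls toward the top of the
    # table (extra-base hits), contact pulls them down slightly.
    modifier = {"contact": -1, "power": +2}.get(approach, 0)
    adjusted = max(1, min(20, roll + modifier))
    for threshold, outcome in BASE_OUTCOMES:
        if adjusted <= threshold:
            return outcome
    return "DOUBLE_2"
```

A single scalar shift cannot express "fewer strikeouts *and* more weak contact" at once, so the real implementation likely adjusts per-outcome weights rather than the roll; this sketch only shows the table-plus-modifier shape.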
|
||||||
|
|
||||||
## Files to Review Before Starting Phase 3
|
### Task 4: Complete Result Charts - Part B (4-5 hours)
|
||||||
|
|
||||||
1. `@.claude/implementation/03-gameplay-features.md` - Phase 3 scope and requirements
|
**File**: `backend/app/config/result_charts.py`
|
||||||
2. `@prd-web-scorecard-1.1.md:780-846` - League config system requirements
|
|
||||||
3. `@prd-web-scorecard-1.1.md:630-669` - WebSocket event specifications
|
**Goal**: Implement runner advancement logic and `PdResultChart`
|
||||||
4. `backend/app/models/game_models.py:45-120` - DefensiveDecision and OffensiveDecision models
|
|
||||||
5. `backend/app/config/base_config.py` - League config structure
|
**Changes**:
|
||||||
6. `backend/app/models/player_models.py:254-495` - PdPlayer with batting/pitching ratings
|
1. Create `RunnerAdvancer` class for runner movement calculation
|
||||||
|
2. Implement `calculate_destination()` for each outcome type
|
||||||
|
3. Implement tag-up rules for flyouts
|
||||||
|
4. Implement force play detection
|
||||||
|
5. Create `PdResultChart` using player card ratings
|
||||||
|
|
||||||
|
**Files to Update**:
|
||||||
|
- `app/config/result_charts.py` - Add `RunnerAdvancer` and `PdResultChart`
|
||||||
|
- `tests/unit/config/test_runner_advancement.py` - Add ~20 test cases
|
||||||
|
|
||||||
|
**Test Command**:
|
||||||
|
```bash
|
||||||
|
pytest tests/unit/config/test_runner_advancement.py -v
|
||||||
|
```
|
||||||
|
|
||||||
|
**Acceptance Criteria**:
|
||||||
|
- [ ] `RunnerAdvancer.advance_runners()` returns list of movements
|
||||||
|
- [ ] Singles: R3 scores, R2 to 3rd, R1 to 2nd
|
||||||
|
- [ ] Doubles: R2+ score, R1 to 3rd
|
||||||
|
- [ ] Flyouts: Tag-up logic (R3 tags on medium/deep fly)
|
||||||
|
- [ ] Groundouts: Score on contact logic (R3 with <2 outs)
|
||||||
|
- [ ] Force plays detected correctly
|
||||||
|
- [ ] `PdResultChart` uses batting/pitching rating probabilities
|
||||||
|
- [ ] Cumulative distribution selects outcome from ratings
|
||||||
|
- [ ] 20+ test cases for all advancement scenarios
|
||||||
|
|
||||||
|
---
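The single-advancement rule from the criteria above ("R3 scores, R2 to 3rd, R1 to 2nd") can be sketched as a pure function. The dict-of-bases representation and function name are assumptions; the real logic belongs in `RunnerAdvancer` and will also handle speed, outs, and hit depth.

```python
def advance_on_single(bases: dict[int, str]) -> tuple[dict[int, str], list[str]]:
    """Advance runners on a generic single.

    `bases` maps base number (1-3) to a runner id. Returns (new_bases, scored);
    the batter is placed on 1st as the placeholder id "BAT".
    """
    scored: list[str] = []
    new_bases: dict[int, str] = {}
    if 3 in bases:
        scored.append(bases[3])      # R3 scores
    if 2 in bases:
        new_bases[3] = bases[2]      # R2 to 3rd
    if 1 in bases:
        new_bases[2] = bases[1]      # R1 to 2nd
    new_bases[1] = "BAT"             # batter takes 1st
    return new_bases, scored
```

Because runners are processed lead runner first, no destination is ever occupied when a trailing runner moves into it; the same ordering trick should carry over to the real implementation.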

### Task 5: Double Play Mechanics (2-3 hours)

**File**: `backend/app/core/play_resolver.py`

**Goal**: Implement double play resolution for GROUNDBALL_A outcomes

**Changes**:
1. Add `_resolve_double_play_attempt()` method
2. Calculate DP probability based on positioning, hit location, and runner speed
3. Return `(outs_recorded, [runner_ids_out])`
4. Integrate with `_resolve_outcome()` for GROUNDBALL_A

**Files to Update**:
- `app/core/play_resolver.py` - Add DP resolution logic
- `tests/unit/core/test_play_resolver.py` - Add ~10 DP test cases

**Test Command**:
```bash
pytest tests/unit/core/test_play_resolver.py::test_double_play -v
```

**Acceptance Criteria**:
- [ ] Base DP probability: 45%
- [ ] DP depth: +20% probability
- [ ] Back depth: -15% probability
- [ ] Up the middle (positions 4, 6): +10%
- [ ] Corners (positions 3, 5): -10%
- [ ] Fast runner: -15%
- [ ] Slow runner: +10%
- [ ] Returns correct outs and runner IDs
- [ ] 10+ test cases for various scenarios
- [ ] Terminal client shows "DP" in play description

---
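The DP modifiers listed above compose additively into one probability. This sketch hard-codes the documented numbers; the function name and the string encoding of depth and runner speed are hypothetical, since `_resolve_double_play_attempt()` will read them from the decision and player models.

```python
def double_play_probability(depth: str, hit_position: int,
                            runner_speed: str) -> float:
    """Combine the Week 7 DP modifiers into a single clamped probability."""
    prob = 0.45                                        # base DP chance
    prob += {"double_play": 0.20, "back": -0.15}.get(depth, 0.0)
    if hit_position in (4, 6):                         # up the middle (2B, SS)
        prob += 0.10
    elif hit_position in (3, 5):                       # corners (1B, 3B)
        prob -= 0.10
    prob += {"fast": -0.15, "slow": 0.10}.get(runner_speed, 0.0)
    return max(0.0, min(1.0, prob))
```

Best case (DP depth, ball up the middle, slow runner) lands at 0.85; worst case (back depth, corner, fast runner) at 0.05, so the chart never makes a DP automatic or impossible.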

### Task 6: WebSocket Handlers (3-4 hours)

**File**: `backend/app/websocket/handlers.py`

**Goal**: Create WebSocket event handlers for decision submission

**Changes**:
1. Add `submit_defensive_decision` event handler
2. Add `submit_offensive_decision` event handler
3. Validate team_id matches current turn
4. Call `state_manager.submit_decision()` to resolve futures
5. Broadcast decision submission to game room
6. Emit `decision_required` events (for frontend)

**Files to Update**:
- `app/websocket/handlers.py` - Add decision handlers
- `tests/unit/websocket/test_decision_handlers.py` - Add ~15 test cases

**Test Command**:
```bash
pytest tests/unit/websocket/test_decision_handlers.py -v
```

**Acceptance Criteria**:
- [ ] `submit_defensive_decision` validates team_id
- [ ] `submit_defensive_decision` calls validators
- [ ] `submit_defensive_decision` resolves pending future
- [ ] `submit_offensive_decision` performs the same validations
- [ ] Error handling with clear messages
- [ ] Broadcasts `decision_submitted` to game room
- [ ] Emits `decision_required` when awaiting decisions
- [ ] 15+ test cases for all scenarios

---
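The handler's core job is resolving the engine's pending `asyncio.Future`. That interaction can be modeled in isolation; `DecisionQueue` here is a minimal stand-in for the real `StateManager`, with validation and broadcasting omitted.

```python
import asyncio


class DecisionQueue:
    """Minimal model of the StateManager decision queue."""

    def __init__(self) -> None:
        self._pending: dict[str, asyncio.Future] = {}

    async def await_decision(self, game_id: str) -> dict:
        # GameEngine side: park a Future and suspend until it resolves.
        future = asyncio.get_running_loop().create_future()
        self._pending[game_id] = future
        return await future

    def submit_decision(self, game_id: str, decision: dict) -> None:
        # WebSocket handler side: resolve the parked Future.
        self._pending.pop(game_id).set_result(decision)


async def demo() -> dict:
    queue = DecisionQueue()
    engine_wait = asyncio.create_task(queue.await_decision("g1"))
    await asyncio.sleep(0)                 # let the engine start waiting
    queue.submit_decision("g1", {"approach": "power"})
    return await engine_wait
```

The real handlers additionally check `team_id` against the current turn and run the Task 2 validators before calling `submit_decision()`, so an illegal submission never resolves the Future.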

### Task 7: Terminal Client Enhancement (2-3 hours)

**File**: `backend/terminal_client/commands.py`

**Goal**: Add support for all decision options in the terminal client

**Changes**:
1. Enhance `defensive` command with all options
2. Enhance `offensive` command with all options
3. Update help text with examples
4. Add validation error display

**Files to Update**:
- `terminal_client/commands.py` - Enhanced option parsing
- `terminal_client/help_text.py` - Updated documentation
- `terminal_client/completions.py` - TAB completion for all options

**Test Command**:
```bash
python -m terminal_client
> help defensive
> help offensive
> defensive shifted_left double_play normal --hold 1,3
> offensive power --steal 2
```

**Acceptance Criteria**:
- [ ] All defensive options work (alignment, depths, hold runners)
- [ ] All offensive options work (approach, steals, hit-and-run, bunt)
- [ ] TAB completion for all option values
- [ ] Help text shows all examples
- [ ] Validation errors display clearly
- [ ] Manual testing shows all features work

---
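Parsing for the enhanced `defensive` command might follow the grammar shown in the test commands above. The function name, positional order, and dict keys are illustrative guesses at what `terminal_client/commands.py` will do, not its actual API.

```python
def parse_defensive(args: list[str]) -> dict:
    """Parse `defensive <alignment> <infield_depth> <outfield_depth> [--hold 1,3]`."""
    decision = {
        "alignment": args[0],
        "infield_depth": args[1],
        "outfield_depth": args[2],
        "hold_runners": [],
    }
    if "--hold" in args:
        # --hold takes a comma-separated list of base numbers.
        raw = args[args.index("--hold") + 1]
        decision["hold_runners"] = [int(base) for base in raw.split(",")]
    return decision
```

Feeding the parsed dict straight into the Task 2 validators gives the client its validation-error display for free.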

## Files to Review Before Starting

1. `app/core/validators.py` - Existing validation patterns to follow
2. `app/config/result_charts.py` - SimplifiedResultChart to enhance
3. `app/core/play_resolver.py` - Integration point for new result charts
4. `app/websocket/handlers.py` - Existing handler patterns
5. `.claude/implementation/WEEK_7_PLAN.md:120-600` - Detailed code examples for all tasks
6. `tests/unit/core/test_validators.py` - Test patterns to follow

---
|
||||||
|
|
||||||
## Verification Steps
|
## Verification Steps
|
||||||
|
|
||||||
After Task 1 (commit):
|
After each task:
|
||||||
|
|
||||||
1. **Verify clean git state**:
|
1. **Run unit tests**:
|
||||||
```bash
|
```bash
|
||||||
git status
|
# After Task 2
|
||||||
# Should show: nothing to commit, working tree clean
|
pytest tests/unit/core/test_validators.py -v
|
||||||
|
|
||||||
|
# After Task 3
|
||||||
|
pytest tests/unit/config/test_result_charts.py -v
|
||||||
|
|
||||||
|
# After Task 4
|
||||||
|
pytest tests/unit/config/test_runner_advancement.py -v
|
||||||
|
|
||||||
|
# After Task 5
|
||||||
|
pytest tests/unit/core/test_play_resolver.py -v
|
||||||
|
|
||||||
|
# After Task 6
|
||||||
|
pytest tests/unit/websocket/test_decision_handlers.py -v
|
||||||
```
|
```
|
||||||
|
|
||||||
2. **Verify all tests pass**:
|
2. **Terminal client testing**:
|
||||||
```bash
|
|
||||||
pytest tests/unit/config/ tests/unit/core/ -v
|
|
||||||
# Expected: 139/140 passing (1 pre-existing timing test fails)
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Verify terminal client works**:
|
|
||||||
```bash
|
```bash
|
||||||
python -m terminal_client
|
python -m terminal_client
|
||||||
# Should start without errors
|
> new_game
|
||||||
# > new_game should work
|
> defensive shifted_left double_play normal
|
||||||
# > resolve should produce varied outcomes
|
> offensive power --steal 2
|
||||||
|
> resolve
|
||||||
|
> status
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Integration test**:
|
||||||
|
```bash
|
||||||
|
pytest tests/integration/test_strategic_gameplay.py -v
|
||||||
|
```
|
||||||
|
|
||||||
|
4. **Commit after each task**:
|
||||||
|
```bash
|
||||||
|
git add [modified files]
|
||||||
|
git commit -m "CLAUDE: Implement Week 7 Task [N] - [Task Name]"
|
||||||
```
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Success Criteria
|
## Success Criteria
|
||||||
|
|
||||||
**Week 6** is **100% complete** when:
|
**Week 7** will be **100% complete** when:
|
||||||
|
|
||||||
- ✅ chaos_d20 renamed throughout codebase
|
- ✅ Task 1: Strategic Decision Integration (DONE)
|
||||||
- ✅ PlayOutcome enum has granular variants (SINGLE_1/2, DOUBLE_2/3, GROUNDBALL_A/B/C, etc.)
|
- [ ] Task 2: Decision Validators (~20 tests passing)
|
||||||
- ✅ PlayResolver uses universal PlayOutcome from app.config
|
- [ ] Task 3: Result Charts Part A (~25 tests passing)
|
||||||
- ✅ play_metadata supports uncapped hit tracking
|
- [ ] Task 4: Result Charts Part B (~20 tests passing)
|
||||||
- ✅ 139/140 tests passing (1 pre-existing timing issue)
|
- [ ] Task 5: Double Play Mechanics (~10 tests passing)
|
||||||
- ✅ Terminal client demonstrates all new outcomes
|
- [ ] Task 6: WebSocket Handlers (~15 tests passing)
|
||||||
- ✅ Git commit created
|
- [ ] Task 7: Terminal Client Enhancement (manual testing passes)
|
||||||
- ⏳ Documentation updated (Task 2)
|
- [ ] All 85+ new tests passing
|
||||||
- ⏳ Phase 3 planned (Task 3)
|
- [ ] Terminal client demonstrates all features
|
||||||
|
- [ ] Documentation updated
|
||||||
|
- [ ] Git commits for each task
|
||||||
|
|
||||||
**Phase 3** will begin when:
|
**Week 8** will begin with:
|
||||||
- ✅ Week 6 committed and documented
|
- Substitution system (pinch hitters, defensive replacements)
|
||||||
- ✅ Phase 3 tasks identified and prioritized
|
- Pitching changes (bullpen management)
|
||||||
- ✅ Strategic decision implementation plan created
|
- Frontend game interface (mobile-first)
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Quick Reference
|
## Quick Reference
|
||||||
|
|
||||||
**Current Test Count**: 139/140 passing
|
**Current Test Count**: 200/201 tests passing (58 config + core/state/dice tests)
|
||||||
- Config tests: 30/30 ✅
|
- Config tests: 58/58 ✅
|
||||||
- Play resolver tests: 19/19 ✅
|
- Play resolver tests: 19/19 ✅
|
||||||
- Dice tests: 34/35 (1 pre-existing)
|
- Dice tests: 34/35 (1 pre-existing)
|
||||||
- Roll types tests: 27/27 ✅
|
- Roll types tests: 27/27 ✅
|
||||||
- Core/State tests: passing ✅
|
- Validators: To be added (Task 2)
|
||||||
- Player model tests: 10/14 (pre-existing failures unrelated to Week 6)
|
|
||||||
|
**Target Test Count After Week 7**: 285+ tests (85+ new tests)
|
||||||
|
|
||||||
**Last Test Run**: All passing (2025-10-29)
|
**Last Test Run**: All passing (2025-10-29)
|
||||||
**Branch**: `implement-phase-2`
|
**Branch**: `implement-phase-2`
|
||||||
@ -390,26 +503,28 @@ After Task 1 (commit):
|
|||||||
**Virtual Env**: `backend/venv/`
|
**Virtual Env**: `backend/venv/`
|
||||||
**Database**: PostgreSQL @ 10.10.0.42:5432 (paperdynasty_dev)
|
**Database**: PostgreSQL @ 10.10.0.42:5432 (paperdynasty_dev)
|
||||||
|
|
||||||
**Key Imports for Phase 3**:
|
**Key Imports for Week 7**:
|
||||||
```python
|
```python
|
||||||
from app.config import get_league_config, PlayOutcome
|
from app.config import get_league_config, PlayOutcome
|
||||||
from app.core.dice import AbRoll
|
from app.core.dice import AbRoll
|
||||||
from app.models.game_models import DefensiveDecision, OffensiveDecision
|
from app.core.validators import game_validator, ValidationError
|
||||||
from app.models.player_models import BasePlayer, PdPlayer, SbaPlayer
|
from app.core.state_manager import state_manager
|
||||||
|
from app.core.ai_opponent import ai_opponent
|
||||||
|
from app.models.game_models import DefensiveDecision, OffensiveDecision, GameState
|
||||||
```
|
```
|
||||||
|
|
||||||
**Recent Commit History** (Last 10):
|
**Recent Commit History** (Last 10):
|
||||||
```
|
```
|
||||||
64aa800 - CLAUDE: Update implementation plans for next session (21 hours ago)
|
95d8703 - CLAUDE: Implement Week 7 Task 1 - Strategic Decision Integration (63 seconds ago)
|
||||||
5d5c13f - CLAUDE: Implement Week 6 league configuration and play outcome systems (22 hours ago)
|
d7caa75 - CLAUDE: Add manual outcome testing to terminal client and Phase 3 planning (45 minutes ago)
|
||||||
a014622 - CLAUDE: Update documentation with session improvements (30 hours ago)
|
6880b6d - CLAUDE: Complete Week 6 - granular PlayOutcome integration and metadata support (70 minutes ago)
|
||||||
1c32787 - CLAUDE: Refactor game models and modularize terminal client (30 hours ago)
|
64aa800 - CLAUDE: Update implementation plans for next session (23 hours ago)
|
||||||
aabb90f - CLAUDE: Implement player models and optimize database queries (30 hours ago)
|
5d5c13f - CLAUDE: Implement Week 6 league configuration and play outcome systems (23 hours ago)
|
||||||
|
a014622 - CLAUDE: Update documentation with session improvements (31 hours ago)
|
||||||
|
1c32787 - CLAUDE: Refactor game models and modularize terminal client (31 hours ago)
|
||||||
|
aabb90f - CLAUDE: Implement player models and optimize database queries (32 hours ago)
|
||||||
05fc037 - CLAUDE: Fix game recovery and add required field validation for plays (3 days ago)
|
05fc037 - CLAUDE: Fix game recovery and add required field validation for plays (3 days ago)
|
||||||
918bead - CLAUDE: Add interactive terminal client for game engine testing (3 days ago)
|
918bead - CLAUDE: Add interactive terminal client for game engine testing (3 days ago)
|
||||||
f9aa653 - CLAUDE: Reorganize Week 6 documentation and separate player model specifications (4 days ago)
|
|
||||||
f3238c4 - CLAUDE: Complete Week 5 testing and update documentation (4 days ago)
|
|
||||||
54092a8 - CLAUDE: Add refactor planning and session documentation (4 days ago)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
@ -420,35 +535,56 @@ f3238c4 - CLAUDE: Complete Week 5 testing and update documentation (4 days ago)
|
|||||||
- **Overall project**: See `@prd-web-scorecard-1.1.md` and `@CLAUDE.md`
|
- **Overall project**: See `@prd-web-scorecard-1.1.md` and `@CLAUDE.md`
|
||||||
- **Architecture**: See `@.claude/implementation/00-index.md`
|
- **Architecture**: See `@.claude/implementation/00-index.md`
|
||||||
- **Backend guide**: See `@backend/CLAUDE.md`
|
- **Backend guide**: See `@backend/CLAUDE.md`
|
||||||
- **Phase 2 completion**: See `@.claude/implementation/02-week6-league-features.md`
|
- **Current phase details**: See `@.claude/implementation/03-gameplay-features.md`
|
||||||
- **Next phase details**: See `@.claude/implementation/03-gameplay-features.md`
|
- **Week 7 detailed plan**: See `@.claude/implementation/WEEK_7_PLAN.md`
|
||||||
|
|
||||||
**Critical files for Phase 3 planning**:
|
**Critical files for current work (Week 7 Tasks 2-7)**:
|
||||||
1. `app/core/game_engine.py` - Main orchestration
|
1. `app/core/validators.py` - Add decision validators (Task 2)
|
||||||
2. `app/core/play_resolver.py` - Outcome resolution
|
2. `app/config/result_charts.py` - Enhance result charts (Tasks 3-4)
|
||||||
3. `app/models/game_models.py` - DefensiveDecision/OffensiveDecision models
|
3. `app/core/play_resolver.py` - Add DP mechanics (Task 5)
|
||||||
4. `app/models/player_models.py` - Player ratings for card resolution
|
4. `app/websocket/handlers.py` - Add decision handlers (Task 6)
|
||||||
5. `app/config/league_configs.py` - League-specific settings
|
5. `terminal_client/commands.py` - Enhance client (Task 7)
|
||||||
|
6. `app/core/game_engine.py` - Already enhanced with decision workflow
|
||||||
|
7. `app/core/state_manager.py` - Already enhanced with decision queue
|
||||||
|
8. `app/core/ai_opponent.py` - Stub AI (enhance in Week 9)
|
||||||
|
|
||||||
**What NOT to do**:
|
**What NOT to do**:
|
||||||
- ❌ Don't modify database schema without migration
|
- ❌ Don't modify database schema without migration
|
||||||
- ❌ Don't use Python's datetime module (use Pendulum)
|
- ❌ Don't use Python's datetime module (use Pendulum)
|
||||||
- ❌ Don't return Optional unless required (Raise or Return pattern)
|
- ❌ Don't return Optional unless required (Raise or Return pattern)
|
||||||
- ❌ Don't disable type checking globally (use targeted # type: ignore)
|
- ❌ Don't disable type checking globally (use targeted # type: ignore)
|
||||||
- ❌ Don't create new files unless necessary (prefer editing existing)
|
- ❌ Don't skip validation - all decisions must be validated
|
||||||
|
- ❌ Don't implement full AI logic yet (Week 9 task)
|
||||||
- ❌ Don't commit without "CLAUDE: " prefix
|
- ❌ Don't commit without "CLAUDE: " prefix
|
||||||
|
|
||||||
**Patterns we're using**:
|
**Patterns we're using**:
|
||||||
- ✅ Pydantic dataclasses for models
|
- ✅ Pydantic dataclasses for models
|
||||||
- ✅ Async/await for all database operations
|
- ✅ Async/await for all database operations
|
||||||
- ✅ Frozen configs for immutability
|
- ✅ Frozen configs for immutability
|
||||||
- ✅ Factory methods for polymorphic players
|
- ✅ asyncio.Future for decision queue
|
||||||
- ✅ Metadata JSON for extensibility
|
- ✅ Validator pattern with clear error messages
|
||||||
- ✅ TODO comments for Phase 3 work
|
- ✅ Result chart abstraction for league-specific resolution
|
||||||
|
- ✅ TODO comments for future work (Week 9 AI, Week 8 substitutions)
|
||||||
|
|
||||||
|
**Decision Workflow Pattern**:
|
||||||
|
```python
|
||||||
|
# 1. GameEngine requests decision
|
||||||
|
decision = await game_engine.await_defensive_decision(state)
|
||||||
|
|
||||||
|
# 2a. AI team: ai_opponent generates immediately
|
||||||
|
# 2b. Human team: state_manager waits for WebSocket submission
|
||||||
|
|
||||||
|
# 3. WebSocket handler receives and submits
|
||||||
|
state_manager.submit_decision(game_id, team_id, decision)
|
||||||
|
|
||||||
|
# 4. Future resolves, GameEngine continues
|
||||||
|
# 5. Validator checks decision legality
|
||||||
|
# 6. Resolution uses decision to modify outcomes
|
||||||
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
**Estimated Time for Next Session**: 2-3 hours
|
**Estimated Time for Next Session**: 18-22 hours (Tasks 2-7)
|
||||||
**Priority**: Medium (planning phase before major development)
|
**Priority**: High (core Phase 3 functionality)
|
||||||
**Blocking Other Work**: No (Phase 2 complete, can proceed independently)
|
**Blocking Other Work**: Yes (frontend needs WebSocket handlers from Task 6)
|
||||||
**Next Milestone After This**: Phase 3 Task 1 - Strategic Decision System
|
**Next Milestone After This**: Week 8 - Substitutions + Frontend UI
|
||||||
|
|||||||