strat-gameplay-webapp/.claude/implementation/PHASE_3_OVERVIEW.md
Latest commit a1f42a93b8 by Cal Corum (2025-11-01): CLAUDE: Implement Phase 3A - X-Check data models and enums
Add foundational data structures for X-Check play resolution system:

Models Added:
- PositionRating: Defensive ratings (range 1-5, error 0-88) for X-Check resolution
- XCheckResult: Dataclass tracking complete X-Check resolution flow with dice rolls,
  conversions (SPD test, G2#/G3#→SI2), error results, and final outcomes
- BasePlayer.active_position_rating: Optional field for current defensive position

Enums Extended:
- PlayOutcome.X_CHECK: New outcome type requiring special resolution
- PlayOutcome.is_x_check(): Helper method for type checking

Documentation Enhanced:
- Play.check_pos: Documented as X-Check position identifier
- Play.hit_type: Documented with examples (single_2_plus_error_1, etc.)

Utilities Added:
- app/core/cache.py: Redis cache key helpers for player positions and game state

Implementation Planning:
- Complete 6-phase implementation plan (3A-3F) documented in .claude/implementation/
- Phase 3A complete with all acceptance criteria met
- Zero breaking changes, all existing tests passing

Next: Phase 3B will add defense tables, error charts, and advancement logic


Phase 3: X-Check Play System - Implementation Overview

Feature: X-Check defensive plays with range/error resolution
Total Estimated Effort: 24-31 hours
Status: Ready for Implementation

Executive Summary

X-Checks are defense-dependent plays that require:

  1. Rolling 1d20 to consult the defense range table (20×5)
  2. Rolling 3d6 to consult the error chart
  3. Resolving SPD tests (catcher plays)
  4. Converting G2#/G3# results based on defensive positioning
  5. Determining final outcome (hit/out/error) with runner advancement
  6. Supporting three modes: PD Auto, PD/SBA Manual, SBA Semi-Auto
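
A minimal sketch of that flow, with placeholder thresholds and stubbed table lookups standing in for the real charts delivered in Phases 3B-3D:

```python
import random


def roll(sides: int, count: int = 1) -> int:
    """Sum `count` dice with the given number of sides."""
    return sum(random.randint(1, sides) for _ in range(count))


def resolve_x_check_sketch(range_rating: int, is_catcher_play: bool) -> str:
    """Illustrative outline only; every lookup below is a stand-in."""
    # Step 1: 1d20 against the (roll x range 1-5) defense table -- stubbed
    raw = "G2#" if roll(20) > 4 * range_rating else "F2"

    # Step 2: 3d6 against the position's error chart -- stubbed
    error = "E1" if roll(6, 3) <= 5 else "NO"

    # Step 3: SPD test on catcher plays (threshold is a placeholder)
    if is_catcher_play and roll(20) <= 10:
        raw = "SAFE"

    # Step 4: G2#/G3# results convert to SI2 depending on positioning
    if raw in ("G2#", "G3#"):
        raw = "SI2"

    # Steps 5-6 (final hit/out/error outcome, runner advancement, and the
    # three resolution modes) are covered in Phases 3C-3E.
    return f"{raw}/{error}"
```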

Phase Breakdown

Phase 3A: Data Models & Enums (2-3 hours)

File: phase-3a-data-models.md

Deliverables:

  • PositionRating model for defense/error ratings
  • XCheckResult intermediate state object
  • PlayOutcome.X_CHECK enum value
  • Redis cache key helpers

Key Files:

  • backend/app/models/player_models.py
  • backend/app/models/game_models.py
  • backend/app/config/result_charts.py
  • backend/app/core/cache.py
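
A hedged sketch of what the Phase 3A structures could look like; field names beyond those described above are illustrative, not the project's actual definitions:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PositionRating:
    """Defensive rating for one player at one position (sketch)."""
    position: str        # e.g. "SS", "CF", "C"
    range_rating: int    # 1-5; selects the column in the 20x5 defense table
    error_rating: int    # feeds the 3d6 error chart lookup


@dataclass
class XCheckResult:
    """Intermediate state carried through an X-Check resolution (sketch)."""
    d20_roll: int
    error_roll: int
    raw_result: str                         # e.g. "G2#", "F1"
    converted_result: Optional[str] = None  # e.g. "SI2" after G2#/G3# conversion
    spd_test_passed: Optional[bool] = None
    error_code: str = "NO"                  # NO / E1 / E2 / E3 / RP
    final_outcome: Optional[str] = None     # hit / out / error
    notes: list[str] = field(default_factory=list)


def player_position_rating_key(game_id: str, player_id: int) -> str:
    """Hypothetical Redis key helper in the spirit of app/core/cache.py."""
    return f"game:{game_id}:player:{player_id}:position_rating"
```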

Phase 3B: League Config Tables (3-4 hours)

File: phase-3b-league-config-tables.md

Deliverables:

  • Defense range tables (infield, outfield, catcher)
  • Error charts (per position type)
  • Holding runner responsibility logic
  • Placeholder advancement functions

Key Files:

  • backend/app/config/common_x_check_tables.py (NEW)
  • backend/app/config/sba_config.py (updates)
  • backend/app/config/pd_config.py (updates)
  • backend/app/core/runner_advancement.py (placeholders)

Data Requirements:

  • OF error charts complete (LF/RF, CF)
  • IF error charts needed (P, C, 1B, 2B, 3B, SS) - marked TODO
  • Full holding runner chart needed - using heuristic for now
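
One plausible shape for these tables, shown with placeholder rows; the keying scheme and all values are assumptions pending the rulebook data:

```python
# Defense range table: d20 roll -> {range rating 1-5: raw result}.
INFIELD_RANGE_TABLE: dict[int, dict[int, str]] = {
    1:  {1: "G3#", 2: "G3", 3: "OUT", 4: "OUT", 5: "OUT"},
    # ... rows 2-19 to be filled from the rulebook ...
    20: {1: "SI2", 2: "SI2", 3: "G2#", 4: "G2", 5: "OUT"},
}

# Error chart: 3d6 total -> error code for a position group.
OUTFIELD_ERROR_CHART: dict[int, str] = {
    3: "E3", 4: "E2", 5: "E2", 6: "E1",
    # ... totals 7-17 to be filled from the rulebook ...
    18: "NO",
}


def lookup_range_result(table: dict[int, dict[int, str]], d20: int, rating: int) -> str:
    """Return the raw X-Check result for a d20 roll and a 1-5 range rating."""
    return table[d20][rating]
```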

Phase 3C: X-Check Resolution Logic (4-5 hours)

File: phase-3c-resolution-logic.md

Deliverables:

  • PlayResolver._resolve_x_check() method
  • Defense table lookup
  • SPD test resolution
  • G2#/G3# conversion logic
  • Error chart lookup
  • Final outcome determination

Key Files:

  • backend/app/core/play_resolver.py

Integration Points:

  • Calls existing dice roller
  • Uses config tables from Phase 3B
  • Creates XCheckResult from Phase 3A
  • Calls advancement functions (placeholders until Phase 3D)
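
A rough outline of how _resolve_x_check() might hang together, reusing the sketched Phase 3A/3B shapes above; the injected dice roller API and the SPD threshold are assumptions:

```python
class PlayResolverSketch:
    """Illustrative structure only; the real method lives on PlayResolver."""

    def __init__(self, dice_roller, range_table, error_chart):
        self.dice = dice_roller          # existing dice roller, injected
        self.range_table = range_table   # Phase 3B defense range table
        self.error_chart = error_chart   # Phase 3B error chart

    def _resolve_x_check(self, rating, is_catcher_play: bool) -> dict:
        d20 = self.dice.roll_d20()
        raw = self.range_table[d20][rating.range_rating]

        error_roll = self.dice.roll_3d6()
        error_code = self.error_chart.get(error_roll, "NO")

        spd_passed = None
        if is_catcher_play:
            spd_passed = self.dice.roll_d20() <= 10   # placeholder threshold

        # G2#/G3# conversion; the real logic also weighs defensive positioning.
        converted = "SI2" if raw in ("G2#", "G3#") else raw

        # Final outcome and runner advancement delegate to the Phase 3D
        # functions (placeholders until then).
        return {
            "d20_roll": d20,
            "error_roll": error_roll,
            "raw_result": raw,
            "converted_result": converted,
            "spd_test_passed": spd_passed,
            "error_code": error_code,
        }
```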

Phase 3D: Runner Advancement Tables (6-8 hours)

File: phase-3d-runner-advancement.md

Deliverables:

  • Groundball advancement tables (G1, G2, G3)
  • Flyball advancement tables (F1, F2, F3)
  • Hit advancement with error bonuses
  • Out advancement with error overrides
  • Complete x_check_* functions

Key Files:

  • backend/app/core/x_check_advancement_tables.py (NEW)
  • backend/app/core/runner_advancement.py (implementations)

Data Requirements:

  • Full advancement tables for all combinations:
    • (G1/G2/G3) × (on_base_code 0-7) × (defender_in True/False) × (NO/E1/E2/E3/RP)
    • (F1/F2/F3) × (on_base_code 0-7) × (NO/E1/E2/E3/RP)
  • Many tables marked TODO pending rulebook data
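
The combination space above maps naturally onto a keyed lookup along these lines; the key fields follow the bullets above, while the sample values are assumptions awaiting rulebook data:

```python
from typing import NamedTuple


class AdvancementKey(NamedTuple):
    ball_type: str      # "G1"-"G3" or "F1"-"F3"
    on_base_code: int   # 0-7 (assumed here to be a bitmask of occupied bases)
    defender_in: bool   # groundballs only; fixed False for flyballs
    error_code: str     # "NO", "E1", "E2", "E3", "RP"


# Each entry maps runners to base-advancement deltas; sample rows only.
X_CHECK_ADVANCEMENT: dict[AdvancementKey, dict[str, int]] = {
    AdvancementKey("G2", 0b001, True, "NO"): {"batter": 0, "runner_1b": 1},   # TODO verify
    AdvancementKey("F1", 0b100, False, "E1"): {"batter": 1, "runner_3b": 1},  # TODO verify
}


def lookup_advancement(key: AdvancementKey) -> dict[str, int]:
    """Return per-runner advancement deltas, raising KeyError if the row is missing."""
    return X_CHECK_ADVANCEMENT[key]
```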

Phase 3E: WebSocket Events & UI Integration (5-6 hours)

File: phase-3e-websocket-events.md

Deliverables:

  • Position rating loading at lineup creation
  • Redis caching for player positions
  • Auto-resolution with Accept/Reject
  • Manual outcome selection
  • Override logging

Key Files:

  • backend/app/services/pd_api_client.py (NEW)
  • backend/app/services/lineup_service.py (NEW)
  • backend/app/websocket/game_handlers.py
  • backend/app/core/x_check_options.py (NEW)
  • backend/app/core/game_engine.py

Event Flow:

PD Auto Mode:
  1. X-Check triggered → Auto-resolve
  2. Broadcast result + Accept/Reject buttons
  3. User accepts → Apply play
  4. User rejects → Log override + Apply manual choice

SBA Manual Mode:
  1. X-Check triggered → Roll dice
  2. Broadcast dice + legal options
  3. User selects outcome
  4. Apply play

SBA Semi-Auto Mode:
  1. Same as PD Auto (if ratings provided)
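
A hedged sketch of the PD Auto exchange as payloads the handlers might send and receive; the event names match the Frontend Dependencies list below, but every field name is illustrative:

```python
# Server -> client after auto-resolution (PD Auto / SBA Semi-Auto).
auto_result_event = {
    "type": "x_check_auto_result",
    "play_id": "abc123",                  # illustrative identifier
    "dice": {"d20": 14, "error_3d6": 9},
    "resolved_outcome": "SI2",
    "error_code": "NO",
    "actions": ["accept", "reject"],
}

# Client -> server when the user accepts or rejects the auto result.
confirm_event = {
    "type": "confirm_x_check_result",
    "play_id": "abc123",
    "accepted": False,
    "manual_outcome": "G2",   # present on reject; the server logs it as an override
}
```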

Phase 3F: Testing & Integration (4-5 hours)

File: phase-3f-testing-integration.md

Deliverables:

  • Comprehensive test fixtures
  • Unit tests for all components
  • Integration tests for complete flows
  • WebSocket event tests
  • Performance validation

Key Files:

  • tests/fixtures/x_check_fixtures.py (NEW)
  • tests/core/test_x_check_resolution.py (NEW)
  • tests/integration/test_x_check_flows.py (NEW)
  • tests/websocket/test_x_check_events.py (NEW)
  • tests/performance/test_x_check_performance.py (NEW)

Coverage Goals:

  • Unit tests: >95% for X-Check code
  • Integration tests: All major flows
  • Performance: <100ms per resolution

Implementation Order

Recommended sequence:

  1. Phase 3A (foundation - models and enums)
  2. Phase 3B (config tables - can be stubbed initially)
  3. Phase 3C (core logic - works with placeholder advancement)
  4. Phase 3E (WebSocket - can test with basic scenarios)
  5. Phase 3D (advancement - fill in the complex tables)
  6. Phase 3F (testing - comprehensive validation)

Rationale: This order allows early end-to-end testing against simplified advancement logic, with the complex tables filled in later.


Critical Dependencies

External Data Needed

  1. Infield error charts (P, C, 1B, 2B, 3B, SS) - currently TODO
  2. Complete holding runner chart - currently using heuristic
  3. Full advancement tables - many marked TODO

System Dependencies

  1. Redis - must be running for position rating cache
  2. PD API - must be accessible for position rating fetch
  3. Existing runner advancement system - must be working for GroundballResultType mapping

Frontend Dependencies

  1. WebSocket client - must handle new event types:
    • x_check_auto_result
    • x_check_manual_options
    • confirm_x_check_result
    • submit_x_check_manual

Testing Strategy

Unit Testing

  • Each helper function in isolation
  • Mocked dice rolls for determinism
  • All edge cases (range 1/5, error 0/25)
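
For example, a unit test with the dice roller mocked, reusing the hypothetical PlayResolverSketch from the Phase 3C section; the real suite would target app.core.play_resolver.PlayResolver instead:

```python
from unittest.mock import MagicMock


def test_x_check_resolution_is_deterministic_with_fixed_rolls():
    dice = MagicMock()
    dice.roll_d20.return_value = 20
    dice.roll_3d6.return_value = 3

    range_table = {20: {3: "G2#"}}   # single-row stand-in for the 20x5 table
    error_chart = {3: "E3"}

    resolver = PlayResolverSketch(dice, range_table, error_chart)
    result = resolver._resolve_x_check(MagicMock(range_rating=3), is_catcher_play=False)

    assert result["converted_result"] == "SI2"   # G2# converts per the positioning rule
    assert result["error_code"] == "E3"
```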

Integration Testing

  • Complete flows (auto, manual, semi-auto)
  • All position types (P, C, IF, OF)
  • Error scenarios (E1, E2, E3, RP)
  • SPD test scenarios
  • Hash conversion scenarios

Performance Testing

  • Single resolution: <100ms
  • Batch (100 plays): <5s
  • No memory leaks
  • Redis caching effective

Manual Testing

  • Full game scenario (PD)
  • Full game scenario (SBA)
  • Accept/Reject flows
  • Override logging verification

Risk Assessment

High Risk

  • Incomplete data tables: Many advancement tables marked TODO
    • Mitigation: Implement placeholders, fill incrementally
  • Complex state management: Multi-step resolution with conditionals
    • Mitigation: Comprehensive unit tests, clear state transitions

Medium Risk

  • Performance: Multiple table lookups per play
    • Mitigation: Performance tests, caching where appropriate
  • Redis dependency: Position ratings require Redis
    • Mitigation: Graceful degradation, clear error messages

Low Risk

  • WebSocket complexity: Standard event patterns
    • Mitigation: Existing patterns work well
  • Database schema: Minimal changes (existing fields)
    • Mitigation: Already have check_pos and hit_type fields

Success Criteria

Functional

  • All three modes working (PD Auto, PD/SBA Manual, SBA Semi-Auto)
  • Correct outcomes for all position types
  • SPD test working
  • Hash conversion working
  • Error application correct
  • Advancement accurate

Non-Functional

  • Resolution latency <100ms
  • No errors in 1000-play test
  • Position ratings cached efficiently
  • Override logging working
  • Test coverage >95%

User Experience

  • Auto mode feels responsive
  • Manual mode options clear
  • Accept/Reject flow intuitive
  • Override provides helpful feedback

Notes for Developers

  1. Import Verification: Always check imports during code review (per CLAUDE.md)
  2. Logging: Use the rotating logger with the f'{__name__}.<className>' naming pattern (see the sketch after this list)
  3. Error Handling: Follow "Raise or Return" - no Optional unless required
  4. Git Commits: Prefix with "CLAUDE: "
  5. Testing: Run tests freely without asking permission
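
For the logging convention in item 2, a minimal example of the naming pattern; the rotating-handler wiring is shown inline only for completeness and is presumably centralized in the real project:

```python
import logging
from logging.handlers import RotatingFileHandler


class XCheckResolver:
    """Demonstrates the f'{__name__}.<className>' logger naming convention."""

    def __init__(self) -> None:
        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")

    def resolve(self) -> None:
        self.logger.info("Resolving X-Check play")


# Illustrative rotating-handler setup.
handler = RotatingFileHandler("x_check.log", maxBytes=1_000_000, backupCount=3)
logging.getLogger(__name__).addHandler(handler)
```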

Next Steps

  1. Review all 6 phase documents
  2. Confirm data table availability (infield error charts, holding runner chart)
  3. Set up Redis if not already running
  4. Begin with Phase 3A implementation
  5. Iterate through phases in recommended order

Questions or concerns? Review individual phase documents for detailed implementation steps.

Total LOC Estimate: ~2000-2500 lines (including tests)
Total Files: ~15 new files + modifications to ~10 existing files