
Testing Strategy

Status: Placeholder
Cross-Cutting Concern: All Phases (critical in Phase 5)


Overview

This document outlines the testing approach for the game engine and frontends, covering unit, integration, E2E, load, and security tests to ensure production-ready quality.

Testing Pyramid

            /\
           /  \          E2E Tests (10%)
          /    \         - Full game flows
         / E2E  \        - Multi-user scenarios
        /--------\
       /          \      Integration Tests (30%)
      /   Integ.   \     - WebSocket flows
     /    Tests     \    - Database operations
    /----------------\
   /                  \  Unit Tests (60%)
  /    Unit Tests      \ - Core logic
 /______________________\ - Pure functions

Test Coverage Goals

  • Overall Coverage: >80%
  • Core Game Logic: >90%
  • WebSocket Handlers: >85%
  • Database Operations: >75%
  • Frontend Components: >70%

Testing Tools

Backend

  • pytest: Unit and integration tests
  • pytest-asyncio: Async test support
  • pytest-cov: Coverage reporting
  • httpx: API client testing
  • python-socketio[client]: WebSocket testing
  • factory_boy: Test data generation
  • Faker: Fake data generation

Frontend

  • Vitest: Unit testing (fast, Vite-native)
  • Testing Library: Component testing
  • Cypress: E2E testing
  • Playwright: Alternative E2E testing
  • Lighthouse CI: Performance testing
  • axe-core: Accessibility testing

Load Testing

  • Locust: Distributed load testing
  • Artillery: Alternative load testing

Security Testing

  • Safety: Dependency vulnerability scanning
  • Bandit: Python security linting
  • OWASP ZAP: Security scanning
  • npm audit: Frontend dependency scanning

Test Organization

Backend Structure

backend/tests/
├── unit/
│   ├── test_game_engine.py
│   ├── test_state_manager.py
│   ├── test_play_resolver.py
│   ├── test_dice.py
│   └── test_validators.py
├── integration/
│   ├── test_websocket.py
│   ├── test_database.py
│   ├── test_api_client.py
│   └── test_full_turn.py
├── e2e/
│   ├── test_full_game.py
│   ├── test_reconnection.py
│   └── test_state_recovery.py
├── load/
│   └── locustfile.py
├── conftest.py          # Shared fixtures
└── factories.py         # Test data factories

Frontend Structure

frontend-{league}/
├── tests/
│   ├── unit/
│   │   ├── composables/
│   │   └── utils/
│   └── components/
│       ├── Game/
│       ├── Decisions/
│       └── Display/
└── cypress/
    ├── e2e/
    │   ├── game-flow.cy.ts
    │   ├── auth.cy.ts
    │   └── spectator.cy.ts
    └── support/

Key Test Scenarios

Unit Tests

  • Dice roll distribution (statistical validation)
  • Play outcome resolution (all d20 results)
  • State transitions
  • Input validation
  • Player model instantiation
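
The play outcome scenario above lends itself to parametrized tests over every d20 result. A minimal sketch, assuming a PlayResolver class with a resolve(roll) method in app.core.play_resolver (the module path and API are assumptions to be replaced with the real ones):

# tests/unit/test_play_resolver.py (sketch)
import pytest

from app.core.play_resolver import PlayResolver  # assumed module path and API

@pytest.mark.parametrize("roll", range(1, 21))
def test_every_d20_roll_resolves(roll):
    """Every possible d20 result must map to a defined play outcome."""
    resolver = PlayResolver()
    outcome = resolver.resolve(roll)
    assert outcome is not None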

Integration Tests

  • WebSocket event flow (connect → join → action → update)
  • Database persistence and recovery
  • API client responses
  • Multi-turn game sequences

E2E Tests

  • Complete 9-inning game
  • Game with substitutions
  • AI opponent game
  • Spectator joining active game
  • Reconnection after disconnect
  • Multiple concurrent games
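
On the backend side, the reconnection scenario is worth sketching early because it doubles as a state-recovery check. A rough outline using python-socketio's AsyncSimpleClient; the event names, auth payload, and test_game fixture mirror the integration example further down and remain assumptions:

# tests/e2e/test_reconnection.py (sketch)
import pytest
import socketio

@pytest.mark.asyncio
async def test_state_recovered_after_reconnect(test_game):
    client = socketio.AsyncSimpleClient()
    await client.connect('http://localhost:8000', auth={'token': 'valid-jwt'})
    await client.emit('join_game', {'game_id': test_game.id, 'role': 'player'})
    first_state = await client.receive()

    # Drop the connection, then reconnect and rejoin the same game
    await client.disconnect()
    await client.connect('http://localhost:8000', auth={'token': 'valid-jwt'})
    await client.emit('join_game', {'game_id': test_game.id, 'role': 'player'})
    recovered_state = await client.receive()

    # The server should replay the current game state to the rejoining player
    assert recovered_state[0] == 'game_state_update'
    assert recovered_state[1]['game_id'] == first_state[1]['game_id']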

Load Tests

  • 10 concurrent games
  • 50 concurrent WebSocket connections
  • Sustained load over 30 minutes
  • Spike testing (sudden load increase)
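
A starting point for tests/load/locustfile.py, covering the HTTP side only (driving full WebSocket games under load would need a custom Socket.IO client inside the tasks). The endpoint paths are assumptions:

# tests/load/locustfile.py (sketch)
from locust import HttpUser, task, between

class SpectatorUser(HttpUser):
    """Simulates a spectator polling game data over the REST API."""
    wait_time = between(1, 3)

    @task(3)
    def list_games(self):
        self.client.get('/api/games')  # assumed endpoint

    @task(1)
    def view_game(self):
        self.client.get('/api/games/1')  # assumed endpoint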

Security Tests

  • SQL injection attempts
  • XSS attempts
  • CSRF protection
  • Authorization bypass attempts
  • Rate limit enforcement
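
As an illustration of the injection scenarios, a sketch using httpx's ASGI transport against the FastAPI app; the file location, endpoint, query parameter, and app import are assumptions about the eventual API:

# tests/security/test_injection.py (sketch)
import pytest
from httpx import ASGITransport, AsyncClient

from app.main import app  # assumed FastAPI application import

@pytest.mark.asyncio
@pytest.mark.parametrize('payload', ["' OR 1=1 --", "'; DROP TABLE games; --"])
async def test_game_search_rejects_sql_injection(payload):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url='http://test') as client:
        response = await client.get('/api/games', params={'search': payload})
        # Malicious input must be handled cleanly, never trigger a server error
        assert response.status_code < 500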

Example Test Cases

Unit Test (Backend)

# tests/unit/test_dice.py
import pytest
from app.core.dice import DiceRoller

def test_dice_roll_range():
    """Test that dice rolls are within valid range"""
    roller = DiceRoller()
    for _ in range(1000):
        roll = roller.roll_d20()
        assert 1 <= roll <= 20

def test_dice_distribution():
    """Test that dice rolls are reasonably distributed"""
    roller = DiceRoller()
    rolls = [roller.roll_d20() for _ in range(10000)]

    # Each number should appear roughly 500 times; allow a wide tolerance
    # (±20%) so normal random variation does not make the test flaky
    for num in range(1, 21):
        count = rolls.count(num)
        assert 400 <= count <= 600

Integration Test (Backend)

# tests/integration/test_websocket.py
import pytest
import socketio

@pytest.mark.asyncio
async def test_game_action_flow(sio_client, test_game):
    """Test complete action flow through WebSocket"""

    # Connect
    await sio_client.connect('http://localhost:8000', auth={'token': 'valid-jwt'})

    # Join game
    await sio_client.emit('join_game', {'game_id': test_game.id, 'role': 'player'})

    # Wait for game state
    response = await sio_client.receive()
    assert response[0] == 'game_state_update'

    # Send defensive decision
    await sio_client.emit('set_defense', {
        'game_id': test_game.id,
        'positioning': 'standard'
    })

    # Verify decision recorded
    response = await sio_client.receive()
    assert response[0] == 'decision_recorded'

Component Test (Frontend)

// tests/components/Game/GameBoard.spec.ts
import { mount } from '@vue/test-utils'
import { describe, it, expect } from 'vitest'
import GameBoard from '@/components/Game/GameBoard.vue'

describe('GameBoard', () => {
  it('displays runners on base correctly', () => {
    const runners = {
      first: null,
      second: 12345,
      third: null
    }

    const wrapper = mount(GameBoard, {
      props: { runners }
    })

    expect(wrapper.find('[data-base="second"]').exists()).toBe(true)
    expect(wrapper.find('[data-base="first"]').exists()).toBe(false)
  })
})

E2E Test (Frontend)

// cypress/e2e/game-flow.cy.ts
describe('Complete Game Flow', () => {
  it('can play through an at-bat', () => {
    cy.login() // Custom command
    cy.visit('/games/create')

    cy.get('[data-test="create-game"]').click()
    cy.get('[data-test="game-mode-live"]').click()
    cy.get('[data-test="start-game"]').click()

    // Wait for game to start
    cy.get('[data-test="game-board"]').should('be.visible')

    // Make defensive decision
    cy.get('[data-test="defense-standard"]').click()
    cy.get('[data-test="confirm-decision"]').click()

    // Verify decision recorded
    cy.get('[data-test="waiting-indicator"]').should('be.visible')
  })
})

Continuous Integration

GitHub Actions Example

name: Test Suite

on: [push, pull_request]

jobs:
  backend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - run: pip install -r backend/requirements-dev.txt
      - run: pytest backend/tests/ --cov --cov-report=xml
      - uses: codecov/codecov-action@v3

  frontend-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        frontend: [frontend-sba, frontend-pd]
    defaults:
      run:
        working-directory: ${{ matrix.frontend }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      - run: npm ci
      - run: npm run test:unit
      - run: npm run test:e2e

Test Data Management

Fixtures

  • Shared fixtures in conftest.py
  • Factory pattern for creating test objects
  • Database seeding for integration tests
  • Mock data for external APIs
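
A minimal sketch of the factory pattern for tests/factories.py, using factory_boy with Faker; the Player model and its fields are placeholders for the real domain models:

# tests/factories.py (sketch)
import factory
from factory import fuzzy

from app.models.player import Player  # assumed model import

class PlayerFactory(factory.Factory):
    class Meta:
        model = Player

    id = factory.Sequence(lambda n: n + 1)
    name = factory.Faker('name')
    position = fuzzy.FuzzyChoice(['C', '1B', '2B', 'SS', '3B', 'LF', 'CF', 'RF'])
    batting_rating = fuzzy.FuzzyInteger(1, 20)

Tests can then create objects with PlayerFactory() or override fields inline, e.g. PlayerFactory(position='SS').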

Test Isolation

  • Each test should be independent
  • Database rollback after each test
  • Clean in-memory state between tests
  • No shared mutable state
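
The database rollback point is usually handled by a transactional session fixture in conftest.py. A sketch assuming SQLAlchemy with a declarative Base (adjust the import and switch to the async engine if that is what the backend uses):

# tests/conftest.py (sketch)
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from app.db.base import Base  # assumed declarative base import

@pytest.fixture(scope='session')
def engine():
    engine = create_engine('sqlite:///:memory:')
    Base.metadata.create_all(engine)
    return engine

@pytest.fixture
def db_session(engine):
    """Each test runs inside a transaction that is rolled back afterwards."""
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    try:
        yield session
    finally:
        session.close()
        transaction.rollback()
        connection.close()

Integration tests depend on db_session and never see each other's writes.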

Performance Testing

Metrics to Track

  • Response time (p50, p95, p99)
  • Throughput (requests/second)
  • Error rate
  • CPU and memory usage
  • Database query time
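
If latencies are captured outside Locust's own report (for example from application logs), the percentiles can be computed directly from the raw samples; a quick sketch with made-up example data:

# sketch: computing latency percentiles from raw samples
from statistics import quantiles

latencies_ms = [112, 98, 143, 251, 87, 310, 95, 120, 101, 99]  # example data
cuts = quantiles(latencies_ms, n=100)  # 99 cut points
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f'p50={p50:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms')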

Load Test Scenarios

  1. Normal Load: 5 concurrent games
  2. Peak Load: 10 concurrent games
  3. Stress Test: 20 concurrent games (breaking point)
  4. Spike Test: 2 → 10 games instantly
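
The spike scenario can be scripted with a Locust load shape alongside the user classes in the locustfile. A sketch, treating one simulated user as roughly one active game (an approximation; the real driver depends on how games are created under load):

# tests/load/locustfile.py (sketch, continued) — load shape for the spike scenario
from locust import LoadTestShape

class SpikeShape(LoadTestShape):
    """Hold a small baseline, jump to peak instantly, then stop."""

    def tick(self):
        run_time = self.get_run_time()
        if run_time < 60:
            return (2, 2)    # baseline: 2 users
        if run_time < 300:
            return (10, 10)  # spike: jump straight to 10 users
        return None          # end the test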

Reference Documents


Note: This is a placeholder to be expanded with specific test implementations during development.