paper-dynasty-database/docs/POSTGRES_MIGRATION_GUIDE.md
Cal Corum 92fc101e38 Fix PostgreSQL compatibility for GROUP BY queries and aggregations
- Fix NULL handling for FK checks in stratplays.py: use x.field_id instead
  of x.field to avoid triggering FK lookups on potentially missing rows
- Cast boolean is_start to integer for SUM() - PostgreSQL cannot sum booleans
- Add missing GROUP BY clause to Decision aggregate query
- Add Case import for boolean-to-integer casting
- Update migration script with boolean/datetime column mappings
- Exclude legacy battingstat/pitchingstat tables from migration
- Add comprehensive POSTGRES_MIGRATION_GUIDE.md documentation

Tested: /plays/batting and /plays/pitching endpoints work with group_by=player
2026-01-26 21:59:25 -06:00


# Paper Dynasty PostgreSQL Migration Guide
## Overview
This document captures lessons learned from test migrations and provides a step-by-step guide for production deployment.
- **Migration Branch:** `postgres-migration`
- **Target:** PostgreSQL 17 on `sba_postgres` container
- **Source:** SQLite `storage/pd_master.db`
---
## Code Changes Required
### 1. BigIntegerField for Discord IDs
Discord snowflake IDs exceed the PostgreSQL INTEGER maximum (2,147,483,647), so these columns were changed to BIGINT:
```python
# app/db_engine.py
# Line 179 - Current.live_scoreboard
live_scoreboard = BigIntegerField() # Discord channel ID
# Line 368 - Team.gmid
gmid = BigIntegerField() # Discord user ID
```
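A quick illustrative check (hypothetical code, not from the repo) of why INTEGER overflows here: every Discord snowflake carries a millisecond timestamp since the Discord epoch in its top 42 bits, so any real ID is far above the 32-bit range:

```python
# Illustrative only: shows that Discord snowflakes cannot fit in a
# PostgreSQL INTEGER column.
INT32_MAX = 2**31 - 1                    # PostgreSQL INTEGER max: 2,147,483,647
DISCORD_EPOCH_MS = 1_420_070_400_000     # 2015-01-01T00:00:00Z in Unix ms

def snowflake_timestamp_ms(snowflake: int) -> int:
    """Recover the Unix ms timestamp from a snowflake's top 42 bits."""
    return (snowflake >> 22) + DISCORD_EPOCH_MS

sample_channel_id = 175928847299117063   # example snowflake from Discord's docs
print(sample_channel_id > INT32_MAX)     # True: needs BIGINT
```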
### 2. Nullable DateTimeField for Optional Timestamps
```python
# Line 715 - GauntletRun.ended
ended = DateTimeField(null=True) # NULL means run not yet ended
```
### 3. Migration Script Type Conversions
The migration script (`scripts/migrate_to_postgres.py`) handles:
**Boolean Columns** (SQLite 0/1 → PostgreSQL True/False):
- event.active
- cardset.for_purchase, in_packs, ranked_legal
- team.has_guide
- packtype.available
- result.ranked, short_game
- stratgame.ranked, short_game, forfeit
- stratplay.is_go_ahead, is_tied, is_new_inning
- decision.is_start
- battingcard.steal_auto
- notification.ack
**DateTime Columns** (Unix ms → PostgreSQL timestamp):
- pack.open_time
- battingstat.created
- pitchingstat.created
- result.created
- stratgame.created, ended
- notification.created
- gauntletrun.created, ended
- paperdex.created
- reward.created
- award.created
**Reserved Words** (quoted in SQL):
- notification.desc → "desc"
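As a sketch of the per-value conversion logic (the column maps and helper name below are illustrative, not the actual `scripts/migrate_to_postgres.py` code):

```python
# Hypothetical sketch of the migration script's type conversions; only a
# subset of the real column mappings is shown.
from datetime import datetime, timezone

BOOL_COLUMNS = {"decision": {"is_start"}, "event": {"active"}}      # subset
DATETIME_COLUMNS = {"pack": {"open_time"}, "result": {"created"}}   # subset

def convert_value(table: str, column: str, value):
    """Map SQLite storage values onto PostgreSQL column types."""
    if value is None:
        return None  # NULLs pass through (e.g. gauntletrun.ended)
    if column in BOOL_COLUMNS.get(table, ()):
        return bool(value)  # SQLite 0/1 -> False/True
    if column in DATETIME_COLUMNS.get(table, ()):
        # Unix milliseconds -> timezone-aware timestamp
        return datetime.fromtimestamp(value / 1000, tz=timezone.utc)
    return value
```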
---
## Pre-Migration Data Cleanup
### Required Cleanup Steps
Run these SQL commands against the SQLite database before migrating, so that as few rows as possible are rejected by PostgreSQL's foreign-key constraints:
```sql
-- 1. Fix orphaned cards with invalid pack_id (set to NULL)
UPDATE card SET pack_id = NULL
WHERE pack_id IS NOT NULL
AND pack_id NOT IN (SELECT id FROM pack);
-- 2. Delete packs from deleted teams
DELETE FROM pack
WHERE team_id NOT IN (SELECT id FROM team);
-- 3. Delete cards from deleted teams
DELETE FROM card
WHERE team_id NOT IN (SELECT id FROM team);
-- 4. Delete duplicate stratplay records (keep lowest id)
DELETE FROM stratplay WHERE id NOT IN (
SELECT MIN(id) FROM stratplay GROUP BY game_id, play_num
);
-- 5. Delete roster entries for deleted cards
DELETE FROM roster
WHERE card_1 NOT IN (SELECT id FROM card)
OR card_2 NOT IN (SELECT id FROM card)
-- ... continue for all 26 card positions
;
-- 6. Delete battingstat for deleted cards
DELETE FROM battingstat
WHERE card_id NOT IN (SELECT id FROM card);
-- 7. Delete pitchingstat for deleted cards
DELETE FROM pitchingstat
WHERE card_id NOT IN (SELECT id FROM card);
-- 8. Delete stratplay for deleted games
DELETE FROM stratplay
WHERE game_id NOT IN (SELECT id FROM stratgame);
-- 9. Delete decision for deleted games
DELETE FROM decision
WHERE game_id NOT IN (SELECT id FROM stratgame);
```
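The cleanup pattern above can be exercised against a throwaway in-memory SQLite database; the two-column schema below is a simplified stand-in for the real tables:

```python
# Minimal demonstration of the orphan-cleanup pattern using simplified
# two-column tables in an in-memory SQLite database.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE team (id INTEGER PRIMARY KEY);
    CREATE TABLE pack (id INTEGER PRIMARY KEY, team_id INTEGER);
    INSERT INTO team (id) VALUES (1);
    INSERT INTO pack (id, team_id) VALUES (10, 1), (11, 999);  -- 999: deleted team
""")
con.execute("DELETE FROM pack WHERE team_id NOT IN (SELECT id FROM team)")
remaining = [r[0] for r in con.execute("SELECT id FROM pack")]
print(remaining)  # [10] -- only the pack owned by a surviving team is kept
```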
### Orphan Analysis Queries
Before cleanup, run these to understand scope:
```sql
-- Count orphaned packs
SELECT COUNT(*) FROM pack WHERE team_id NOT IN (SELECT id FROM team);
-- Count orphaned cards (team)
SELECT COUNT(*) FROM card WHERE team_id NOT IN (SELECT id FROM team);
-- Count orphaned cards (pack)
SELECT COUNT(*) FROM card WHERE pack_id IS NOT NULL AND pack_id NOT IN (SELECT id FROM pack);
-- Count duplicate stratplays
SELECT COUNT(*) FROM (
SELECT game_id, play_num FROM stratplay
GROUP BY game_id, play_num HAVING COUNT(*) > 1
);
-- Count orphaned battingstats
SELECT COUNT(*) FROM battingstat WHERE card_id NOT IN (SELECT id FROM card);
-- Count orphaned pitchingstats
SELECT COUNT(*) FROM pitchingstat WHERE card_id NOT IN (SELECT id FROM card);
```
---
## Migration Execution Steps
### Phase 1: Build Docker Image (Local)
```bash
cd /mnt/NV2/Development/paper-dynasty/database
git checkout postgres-migration
docker build -t manticorum67/paper-dynasty-database:postgres-migration .
docker push manticorum67/paper-dynasty-database:postgres-migration
```
### Phase 2: Create PostgreSQL Database (Dev Server)
```bash
ssh sba-db
# Create user and database
docker exec sba_postgres psql -U sba_admin -d postgres -c \
"CREATE USER pd_admin WITH PASSWORD 'YOUR_PASSWORD';"
docker exec sba_postgres psql -U sba_admin -d postgres -c \
"CREATE DATABASE pd_master OWNER pd_admin;"
docker exec sba_postgres psql -U sba_admin -d postgres -c \
"GRANT ALL PRIVILEGES ON DATABASE pd_master TO pd_admin;"
```
### Phase 3: Create Schema
```bash
docker pull manticorum67/paper-dynasty-database:postgres-migration
docker run --rm \
--network dev-sba-database_default \
-v /home/cal/container-data/dev-sba-database/logs:/usr/src/app/logs \
-e DATABASE_TYPE=postgresql \
-e POSTGRES_HOST=sba_postgres \
-e POSTGRES_DB=pd_master \
-e POSTGRES_USER=pd_admin \
-e POSTGRES_PASSWORD='YOUR_PASSWORD' \
-e POSTGRES_PORT=5432 \
manticorum67/paper-dynasty-database:postgres-migration \
python -c "
from app.db_engine import db, Current, Rarity, Event, Cardset, MlbPlayer, Player, Team, PackType, Pack, Card, Roster, Result, BattingStat, PitchingStat, Award, Paperdex, Reward, GameRewards, Notification, GauntletReward, GauntletRun, BattingCard, BattingCardRatings, PitchingCard, PitchingCardRatings, CardPosition, StratGame, StratPlay, Decision
db.create_tables([Current, Rarity, Event, Cardset, MlbPlayer, Player, Team, PackType, Pack, Card, Roster, Result, BattingStat, PitchingStat, Award, Paperdex, Reward, GameRewards, Notification, GauntletReward, GauntletRun, BattingCard, BattingCardRatings, PitchingCard, PitchingCardRatings, CardPosition, StratGame, StratPlay, Decision])
db.close()
"
```
### Phase 4: Run Migration
```bash
docker run --rm \
--network dev-sba-database_default \
-v /path/to/storage:/usr/src/app/storage \
-v /path/to/logs:/usr/src/app/logs \
-v /path/to/scripts:/usr/src/app/scripts \
-e DATABASE_TYPE=postgresql \
-e POSTGRES_HOST=sba_postgres \
-e POSTGRES_DB=pd_master \
-e POSTGRES_USER=pd_admin \
-e POSTGRES_PASSWORD='YOUR_PASSWORD' \
-e POSTGRES_PORT=5432 \
manticorum67/paper-dynasty-database:postgres-migration \
python scripts/migrate_to_postgres.py --sqlite-path storage/pd_master.db
```
### Phase 5: Verify Migration
```sql
-- Check table counts
SELECT 'player' as tbl, COUNT(*) FROM player
UNION ALL SELECT 'team', COUNT(*) FROM team
UNION ALL SELECT 'card', COUNT(*) FROM card
UNION ALL SELECT 'stratplay', COUNT(*) FROM stratplay
ORDER BY tbl;
```
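A small helper (hypothetical, with sample counts rather than real migration output) can diff per-table counts pulled from each database:

```python
# Illustrative post-migration check: compare per-table row counts taken
# from SQLite and PostgreSQL. The counts below are made-up sample values.
def count_mismatches(sqlite_counts: dict, postgres_counts: dict) -> dict:
    """Return {table: (sqlite, postgres)} for tables whose counts differ."""
    return {
        tbl: (sqlite_counts[tbl], postgres_counts.get(tbl, 0))
        for tbl in sqlite_counts
        if sqlite_counts[tbl] != postgres_counts.get(tbl, 0)
    }

src = {"player": 1200, "team": 64, "stratplay": 500000}
dst = {"player": 1200, "team": 64, "stratplay": 498100}  # some rows skipped
print(count_mismatches(src, dst))  # {'stratplay': (500000, 498100)}
```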
### Phase 6: Start API with PostgreSQL
```bash
docker run -d \
--name pd_postgres_api \
--network dev-sba-database_default \
-p 8100:80 \
-e DATABASE_TYPE=postgresql \
-e POSTGRES_HOST=sba_postgres \
-e POSTGRES_DB=pd_master \
-e POSTGRES_USER=pd_admin \
-e POSTGRES_PASSWORD='YOUR_PASSWORD' \
-e POSTGRES_PORT=5432 \
-e API_TOKEN=$API_TOKEN \
manticorum67/paper-dynasty-database:postgres-migration
```
---
## Test Results Summary
### Test Migration #3 (Fresh Production Copy + Minimal Cleanup)
| Metric | Value |
|--------|-------|
| Tables Successful | 23/29 (79%) |
| Records Inserted | 562,489 |
| Records Skipped | 218,387 |
| Duration | 25 minutes |
**Pre-migration cleanup performed:**
- Deleted 1,953 orphaned battingstats (invalid card_id)
- Deleted 437 orphaned pitchingstats (invalid card_id)
**Records skipped during migration (expected):**
- These are historical records tied to teams and cards deleted in previous seasons
- FK integrity is enforced in PostgreSQL, so orphaned records are correctly rejected
- The API functions correctly with the migrated data
### Working Endpoints (All Tested)
- /api/v2/current
- /api/v2/teams
- /api/v2/players
- /api/v2/cards
- /api/v2/cardsets
- /api/v2/games
- /api/v2/decisions
- /api/v2/plays/batting (with group_by=player)
- /api/v2/plays/pitching (with group_by=player)
### Fixed Issues (This Session)
1. **NULL handling for GROUP BY foreign keys** (`app/routers_v2/stratplays.py`)
- Changed `if x.batter` → `if x.batter_id` (line ~718)
- Changed `if x.batter_team` → `if x.batter_team_id` (line ~725)
- Changed `if x.pitcher` → `if x.pitcher_id` (line ~1100)
- Changed `if x.pitcher_team` → `if x.pitcher_team_id` (line ~1103)
- **Why:** Checking `x.foreign_key` triggers FK lookup which fails if the referenced row doesn't exist. Checking `x.foreign_key_id` only checks the raw integer value.
2. **Boolean SUM for PostgreSQL** (`app/routers_v2/stratplays.py`)
- Changed `fn.SUM(Decision.is_start)` → `fn.SUM(Case(None, [(Decision.is_start == True, 1)], 0))`
- **Why:** PostgreSQL cannot sum booleans directly; must cast to integer first.
3. **Missing GROUP BY on Decision query** (`app/routers_v2/stratplays.py`)
- Added `.group_by(Decision.pitcher)` to the `all_dec` query
- **Why:** PostgreSQL requires explicit GROUP BY for all non-aggregate columns (SQLite was lenient).
4. **Import Case from db_engine** (`app/routers_v2/stratplays.py`)
- Added `Case` to imports for PostgreSQL-compatible boolean aggregation
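Fix 1 can be illustrated with a small mock (this is not Peewee itself; it only imitates the lookup-on-access behavior that makes `x.batter` unsafe when the referenced row is gone):

```python
# Illustrative mock of FK attribute access: the plain attribute resolves
# the related row (and fails if it was deleted), while the `_id`
# attribute is just the stored integer.
class MockPlay:
    _players = {1: "Ruth"}          # pretend player table

    def __init__(self, batter_id):
        self.batter_id = batter_id  # raw FK value, no lookup

    @property
    def batter(self):               # FK access triggers a row lookup
        return self._players[self.batter_id]  # KeyError if row is missing

play = MockPlay(batter_id=42)       # references a deleted player
print(play.batter_id)               # 42 -- safe, no lookup performed
try:
    play.batter                     # lookup fails for the missing row
except KeyError:
    print("FK lookup failed")
```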
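The `Case` expression in fix 2 compiles to portable SQL along the lines of `SUM(CASE WHEN is_start THEN 1 ELSE 0 END)`. A self-contained demonstration with made-up decision rows (SQLite shown here, since it accepts both the old and the portable form):

```python
# Demonstrates the portable boolean-sum pattern that the Peewee Case
# expression produces, using raw SQLite and made-up rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE decision (pitcher INTEGER, is_start INTEGER);
    INSERT INTO decision VALUES (1, 1), (1, 0), (1, 1), (2, 0);
""")
rows = con.execute("""
    SELECT pitcher,
           SUM(CASE WHEN is_start = 1 THEN 1 ELSE 0 END) AS starts
    FROM decision
    GROUP BY pitcher
    ORDER BY pitcher
""").fetchall()
print(rows)  # [(1, 2), (2, 0)] -- pitcher 1 has two starts, pitcher 2 none
```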
---
## Rollback Plan
To switch back to SQLite:
```bash
# Stop PostgreSQL container
docker stop pd_postgres_api
# Start with SQLite
docker run -d \
--name pd_sqlite_api \
-p 8100:80 \
-v /path/to/storage:/usr/src/app/storage \
-e DATABASE_TYPE=sqlite \
-e API_TOKEN=$API_TOKEN \
manticorum67/paper-dynasty-database:latest
```
---
## Production Deployment Checklist
- [ ] Backup production SQLite
- [ ] Run orphan analysis on production data
- [ ] Execute cleanup SQL
- [ ] Verify cleanup counts
- [ ] Schedule maintenance window
- [ ] Run migration
- [ ] Verify record counts
- [ ] Test all endpoints
- [ ] Update docker-compose.yml with PostgreSQL config
- [ ] Monitor for 24 hours
- [ ] Remove SQLite dependency (optional)