feat: add ProcessedGame ledger for full idempotency in update_season_stats() #105

Closed
opened 2026-03-18 05:03:50 +00:00 by Claude · 1 comment
Collaborator

Background

update_season_stats() in app/services/season_stats.py guards against double-counting by checking whether any PlayerSeasonStats row still carries the incoming game_id as last_game. This works for same-game immediate replay but fails for out-of-order re-delivery.

Problem

If game G+1 is processed before game G is re-delivered, no row will carry last_game = G anymore (it was overwritten to G+1). The guard returns already_processed = False and game G's stats are double-counted silently. Evolution tier scoring would then compute from inflated stats with no error raised.
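The failure mode can be reproduced with a minimal stdlib-only simulation of the guard (the function shape and field names here are illustrative, not the real service code):

```python
# Simulation of the last_game guard described above (names are illustrative).
def update_season_stats(stats, game_id, points):
    # Guard: skip if any row still carries this game_id as last_game.
    if any(row["last_game"] == game_id for row in stats.values()):
        return {"skipped": True}
    for row in stats.values():
        row["points"] += points
        row["last_game"] = game_id
    return {"skipped": False}

stats = {"p1": {"points": 0, "last_game": None}}
update_season_stats(stats, game_id=1, points=10)  # game G
update_season_stats(stats, game_id=2, points=5)   # game G+1 overwrites last_game
update_season_stats(stats, game_id=1, points=10)  # re-delivery of G: guard misses
assert stats["p1"]["points"] == 25  # double-counted: correct total is 15
```

The third call slips past the guard because no row remembers game 1 any more, which is exactly the out-of-order re-delivery case.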

Flagged in PR #104 review.

Proposed Fix

Add a ProcessedGame model (a processed-game ledger keyed on game_id) and perform an atomic INSERT + check there instead of relying on last_game:

from datetime import datetime

from peewee import DateTimeField, ForeignKeyField, Model

class ProcessedGame(Model):
    # One row per game that has already been applied to season stats.
    game = ForeignKeyField(StratGame, primary_key=True)
    processed_at = DateTimeField(default=datetime.now)

    class Meta:
        table_name = "processed_game"

In update_season_stats():

with db.atomic():
    # get_or_create() returns (row, created); created is False on any replay.
    _, created = ProcessedGame.get_or_create(game_id=game_id)
    if not created:
        return {..., "skipped": True}
    # ... rest of processing

This is safe under concurrent access: in PostgreSQL the primary-key constraint rejects the losing INSERT (peewee's get_or_create falls back to fetching the existing row on IntegrityError), and SQLite serializes writers.

Migration needed

CREATE TABLE IF NOT EXISTS processed_game (
    game_id INTEGER PRIMARY KEY REFERENCES stratgame(id) ON DELETE CASCADE,
    -- CURRENT_TIMESTAMP instead of NOW() so the same DDL runs on SQLite and PostgreSQL
    processed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
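The claim-once semantics of the ledger can be demonstrated standalone with stdlib sqlite3, where INSERT OR IGNORE against the primary key plays the role of get_or_create (this is a sketch, not the proposed implementation):

```python
import sqlite3

# Standalone demo of the ledger semantics; INSERT OR IGNORE against the
# primary key stands in for peewee's get_or_create().
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed_game (game_id INTEGER PRIMARY KEY, "
           "processed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)")

def claim(game_id):
    # True only for the first caller; replays find the existing row and skip.
    cur = db.execute("INSERT OR IGNORE INTO processed_game (game_id) VALUES (?)",
                     (game_id,))
    return cur.rowcount == 1

assert claim(7) is True    # first delivery of game 7 proceeds
assert claim(8) is True    # a later game is processed
assert claim(7) is False   # out-of-order re-delivery of game 7 is skipped
```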

Acceptance criteria

  • ProcessedGame model added to db_engine.py
  • Migration SQL added
  • update_season_stats() uses ledger for idempotency check
  • New test: test_out_of_order_replay_prevented — process game G+1, then re-deliver game G, assert stats not double-counted
  • Existing test_double_count_prevention still passes
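A possible shape for the new regression test, simulated here with stdlib sqlite3 in place of the real models (table and function names are illustrative):

```python
import sqlite3

# Sketch of test_out_of_order_replay_prevented using an in-memory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed_game (game_id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE season_stats (player TEXT PRIMARY KEY, points INTEGER)")
db.execute("INSERT INTO season_stats VALUES ('p1', 0)")

def update_season_stats(game_id, points):
    with db:  # atomic: the ledger insert and stats update commit together
        cur = db.execute("INSERT OR IGNORE INTO processed_game VALUES (?)",
                         (game_id,))
        if cur.rowcount == 0:
            return {"skipped": True}
        db.execute("UPDATE season_stats SET points = points + ?", (points,))
        return {"skipped": False}

update_season_stats(1, 10)           # game G
update_season_stats(2, 5)            # game G+1
result = update_season_stats(1, 10)  # re-deliver game G out of order
assert result == {"skipped": True}
points = db.execute("SELECT points FROM season_stats").fetchone()[0]
assert points == 15                  # not double-counted
```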
Claude added the
ai-working
label 2026-03-18 06:01:07 +00:00
Claude removed the
ai-working
label 2026-03-18 06:06:07 +00:00
Author
Collaborator

PR #106 opened: https://git.manticorum.com/cal/paper-dynasty-database/pulls/106

Approach: Added a ProcessedGame ledger table (PK = game_id FK to stratgame). In update_season_stats(), replaced the last_game FK check with ProcessedGame.get_or_create(game_id=game_id) inside the existing db.atomic() block. The atomic INSERT means the first call for any game_id succeeds and proceeds; all subsequent calls — including out-of-order re-delivery after game G+1 has been processed — find the existing ledger row and return skipped=True without touching stats.

Claude added the
ai-pr-opened
label 2026-03-18 06:06:15 +00:00
cal closed this issue 2026-03-18 20:30:37 +00:00
Reference: cal/paper-dynasty-database#105