Compare commits

...

134 Commits

Author SHA1 Message Date
cal
ad5d5561c6 Merge pull request 'fix: refractor card art post-merge fixes — cache bypass, template guards, dev server' (#180) from fix/refractor-card-art-followup into main
All checks were successful
Build Docker Image / build (push) Successful in 8m29s
2026-04-04 17:41:05 +00:00
Cal Corum
dc9269eeed fix: refractor card art post-merge fixes — cache bypass, template guards, dev server
- Skip PNG cache when ?tier= param is set to prevent serving stale T0 images
- Move {% if %} guard before diamond_colors dict in player_card.html
- Extract base #fullCard styles outside refractor conditional in tier_style.html
- Make run-local.sh DB host configurable, clean up Playwright check

Follow-up to PR #179

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 12:37:30 -05:00
cal
3e84a06b23 Merge pull request 'feat: refractor tier-specific card art rendering' (#179) from feature/refractor-card-art into main 2026-04-04 17:33:36 +00:00
Cal Corum
d92ab86aa7 fix: visual tuning from live preview — diamond position, borders, corners, header z-index
- Move diamond left to align bottom point with center column divider
- Keep all border widths uniform across tiers (remove T4 bold borders)
- Remove corner accents entirely (T4 differentiated by glow + prismatic)
- Fix T4 header z-index: don't override position on absolutely-positioned
  topright stat elements (stealing, running, bunting, hit & run)
- Add ?tier= query param for dev preview of tier styling on base cards
- Add run-local.sh for local API testing against dev database
- Add .env.local and .run-local.pid to .gitignore

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 11:20:05 -05:00
Cal Corum
830e703e76 fix: address PR #179 review — consolidate CSS, extract inline styles, add tests
- Consolidate T3 duplicate #header rule into single block with overflow/position
- Add explicit T2 #resultHeader border-bottom-width (4px) for clarity
- Move diamond quad filled box-shadow from inline styles to .diamond-quad.filled CSS rule
- Add TestResolveTier: 6 parametrized tests covering tier roundtrip, base card, unknown variant

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 00:43:27 -05:00
Cal Corum
b32e19a4ac feat: add refractor tier-specific card art rendering
Implement tier-aware visual styling for card PNG rendering (T0-T4).
Each refractor tier gets distinct borders, header backgrounds, column
header gradients, diamond tier indicators, and decorative effects.

- New tier_style.html template: per-tier CSS overrides (borders, headers,
  gradients, inset glow, diamond positioning, corner accents)
- Diamond indicator: 2x2 CSS grid rotated 45deg at header/result boundary,
  progressive fill (1B→2B→3B→Home) with tier-specific colors
- T4 Superfractor: bold gold borders, dual gold-teal glow, corner accents,
  purple diamond with glow pulse animation
- resolve_refractor_tier() helper: pure-math tier lookup from variant hash
- T3/T4 animations defined but paused for static PNG capture (APNG follow-up)

Relates-to: initiative #19

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 00:14:33 -05:00
cal
ffe07ec54c Merge pull request 'fix: auto-initialize RefractorCardState in evaluate-game' (#178) from fix/refractor-auto-init-missing-states into main
All checks were successful
Build Docker Image / build (push) Successful in 8m15s
2026-03-31 06:25:41 +00:00
Cal Corum
add175e528 fix: auto-initialize RefractorCardState in evaluate-game for legacy cards
Cards created before the refractor system was deployed have no
RefractorCardState row. Previously evaluate-game silently skipped these
players. Now it calls initialize_card_refractor on-the-fly so any card
used in a game gets refractor tracking regardless of when it was created.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 01:22:37 -05:00
cal
31c86525de Merge pull request 'feat: Refractor Phase 2 integration — wire boost into evaluate-game' (#177) from feature/refractor-phase2-integration into main
All checks were successful
Build Docker Image / build (push) Successful in 8m13s
2026-03-30 18:17:29 +00:00
Cal Corum
7f17c9b9f2 fix: address PR #177 review — move import os to top-level, add audit idempotency guard
- Move `import os` from inside evaluate_game() to module top-level imports
  (lazy imports are only for circular dependency avoidance)
- Add get_or_none idempotency guard before RefractorBoostAudit.create()
  inside db.atomic() to prevent IntegrityError on UNIQUE(card_state, tier)
  constraint in PostgreSQL when apply_tier_boost is called twice for the
  same tier
- Update atomicity test stub to provide card_state/tier attributes for
  the new Peewee expression in the idempotency guard

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 13:16:27 -05:00
Cal Corum
6a176af7da feat: Refractor Phase 2 integration — wire boost into evaluate-game
When a card reaches a new Refractor tier during game evaluation, the
system now creates a boosted variant card with modified ratings. This
connects the Phase 2 Foundation pure functions (PR #176) to the live
evaluate-game endpoint.

Key changes:
- evaluate_card() gains dry_run parameter so apply_tier_boost() is the
  sole writer of current_tier, ensuring atomicity with variant creation
- apply_tier_boost() orchestrates the full boost flow: source card
  lookup, boost application, variant card + ratings creation, audit
  record, and atomic state mutations inside db.atomic()
- evaluate_game() calls evaluate_card(dry_run=True) then loops through
  intermediate tiers on tier-up, with error isolation per player
- Display stat helpers compute fresh avg/obp/slg for variant cards
- REFRACTOR_BOOST_ENABLED env var provides a kill switch
- 51 new tests: unit tests for display stats, integration tests for
  orchestration, HTTP endpoint tests for multi-tier jumps, pitcher
  path, kill switch, atomicity, idempotency, and cross-player isolation
- Clarified all "79-sum" references to note the 108-total card invariant

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 13:04:52 -05:00
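The dry_run split and intermediate-tier loop described above can be sketched in a few lines. Everything here is illustrative — the thresholds, state shape, and function bodies are assumptions; only the control flow (compute without writing, then boost tier by tier) comes from the commit:

```python
# Assumed value thresholds for T0..T4 and a dict stand-in for card state.
THRESHOLDS = [0, 10, 25, 50, 100]
state = {"current_tier": 0, "value": 0.0}

def compute_tier(value: float) -> int:
    return max(t for t, th in enumerate(THRESHOLDS) if value >= th)

def evaluate_card(value: float, dry_run: bool = False) -> int:
    new_tier = compute_tier(value)
    if not dry_run:
        state["current_tier"] = new_tier  # legacy single-writer path
    state["value"] = value
    return new_tier

def apply_tier_boost(tier: int) -> None:
    # Sole writer of current_tier; in the real code the variant card,
    # audit row, and tier write share one db.atomic() transaction.
    state["current_tier"] = tier

def evaluate_game_for_card(value: float) -> None:
    old = state["current_tier"]
    new = evaluate_card(value, dry_run=True)
    # On a multi-tier jump, each intermediate tier is boosted in order.
    for tier in range(old + 1, new + 1):
        apply_tier_boost(tier)
```

With dry_run=True, `evaluate_card` reports the freshly computed tier but never writes it, which is what makes `apply_tier_boost` the single writer.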
cal
70f984392d Merge pull request 'feat: Refractor Phase 2 foundation — boost functions, schema, tests' (#176) from feature/refractor-phase2-foundation into main 2026-03-30 16:11:07 +00:00
Cal Corum
a7d02aeb10 style: remove redundant parentheses on boost_delta_json declaration
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 11:07:19 -05:00
Cal Corum
776f1a5302 fix: address PR review findings — rename evolution_tier to refractor_tier
- Rename `evolution_tier` parameter to `refractor_tier` in compute_variant_hash()
  to match the refractor naming convention established in PR #131
- Update hash input dict key accordingly (safe: function is new, no stored hashes)
- Update test docstrings referencing the old parameter name
- Remove redundant parentheses on boost_delta_json TextField declaration

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 11:06:38 -05:00
Cal Corum
4a1251a734 feat: add Refractor Phase 2 foundation — boost functions, schema, tests
Pure functions for computing boosted card ratings when a player
reaches a new Refractor tier. Batter boost applies fixed +0.5 to
four offensive columns per tier; pitcher boost uses a 1.5 TB-budget
priority algorithm. Both preserve the 108-sum invariant.

- Create refractor_boost.py with apply_batter_boost, apply_pitcher_boost,
  and compute_variant_hash (Decimal arithmetic, zero-floor truncation)
- Add RefractorBoostAudit model, Card.variant, BattingCard/PitchingCard
  image_url, RefractorCardState.variant fields to db_engine.py
- Add migration SQL for refractor_card_state.variant column and
  refractor_boost_audit table (JSONB, UNIQUE constraint, transactional)
- 26 unit tests covering 108-sum invariant, deltas, truncation, TB
  accounting, determinism, x-check protection, and variant hash behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 13:39:03 -05:00
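The batter-boost rule stated above (+0.5 to four offensive columns per tier, 108-sum preserved) can be sketched with Decimal arithmetic. The column names and the choice to reclaim the added 2.0 from an out column are assumptions — the log says the sum is preserved but not where the rebalance lands:

```python
from decimal import Decimal

BOOST = Decimal("0.5")
OFFENSE = ("single", "double", "triple", "homerun")  # assumed column names

def apply_batter_boost(ratings: dict[str, Decimal]) -> dict[str, Decimal]:
    boosted = dict(ratings)
    for col in OFFENSE:
        boosted[col] += BOOST
    # Reclaim the 2.0 added across the four columns from the out column
    # so the card total stays at 108 (zero-floor truncation omitted).
    boosted["out"] -= BOOST * len(OFFENSE)
    assert sum(boosted.values()) == sum(ratings.values())
    return boosted
```

Decimal (rather than float) keeps the invariant check exact, which is presumably why the commit calls out "Decimal arithmetic" for these pure functions.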
cal
c2c978ac47 Merge pull request 'feat: add evaluated_only filter to GET /api/v2/refractor/cards (#174)' (#175) from issue/174-get-api-v2-refractor-cards-add-evaluated-only-filt into main
All checks were successful
Build Docker Image / build (push) Successful in 8m11s
2026-03-25 22:53:05 +00:00
Cal Corum
537eabcc4d feat: add evaluated_only filter to GET /api/v2/refractor/cards (#174)
Closes #174

Adds `evaluated_only: bool = Query(default=True)` to `list_card_states()`.
When True (the default), cards with `last_evaluated_at IS NULL` are excluded —
these are placeholder rows created at pack-open time but never run through the
evaluator. At team scale this eliminates ~2739 zero-value rows from the
default response, making the Discord /refractor status command efficient
without any bot-side changes.

Set `evaluated_only=false` to include all rows (admin/pipeline use case).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 17:32:59 -05:00
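The evaluated_only filter described above reduces to a small predicate. A minimal sketch, with the row shape assumed and the FastAPI `Query` machinery stripped away:

```python
from datetime import datetime

def list_card_states(rows: list[dict], evaluated_only: bool = True) -> dict:
    # Default excludes placeholder rows initialised at pack-open time but
    # never run through the evaluator (last_evaluated_at IS NULL).
    if evaluated_only:
        rows = [r for r in rows if r["last_evaluated_at"] is not None]
    return {"count": len(rows), "card_states": rows}
```

Passing `evaluated_only=False` restores the full listing for the admin/pipeline use case.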
cal
7e7ff960e2 Merge pull request 'feat: add limit/pagination to paperdex endpoint (#143)' (#167) from issue/143-feat-add-limit-pagination-to-paperdex-endpoint into main
All checks were successful
Build Docker Image / build (push) Successful in 7m53s
2026-03-25 14:52:57 +00:00
cal
792c6b96f9 Merge pull request 'feat: add limit/pagination to cardpositions endpoint (#142)' (#168) from issue/142-feat-add-limit-pagination-to-cardpositions-endpoin into main 2026-03-25 14:52:55 +00:00
cal
2c077d0fd3 Merge branch 'main' into issue/143-feat-add-limit-pagination-to-paperdex-endpoint 2026-03-25 14:52:41 +00:00
cal
3d0c99b183 Merge branch 'main' into issue/142-feat-add-limit-pagination-to-cardpositions-endpoin 2026-03-25 14:52:34 +00:00
cal
eefd4afa37 Merge pull request 'feat: add GET /api/v2/refractor/cards list endpoint (#172)' (#173) from issue/172-feat-add-get-api-v2-refractor-cards-list-endpoint into main 2026-03-25 14:52:24 +00:00
Cal Corum
0b5d0b474b feat: add GET /api/v2/refractor/cards list endpoint (#172)
Closes #172

- New GET /api/v2/refractor/cards endpoint in refractor router with
  team_id (required), card_type, tier, season, progress, limit, offset filters
- season filter uses EXISTS subquery against batting/pitching_season_stats
- progress=close filter uses CASE expression to compare current_value
  against next tier threshold (>= 80%)
- LEFT JOIN on Player so deleted players return player_name: null
- Sorting: current_tier DESC, current_value DESC
- count reflects total matching rows before pagination
- Extended _build_card_state_response() with progress_pct (computed) and
  optional player_name; single-card endpoint gains progress_pct automatically
- Added non-unique team_id index on refractor_card_state in db_engine.py
- Migration: 2026-03-25_add_refractor_card_state_team_index.sql
- Removed pre-existing unused RefractorTrack import in evaluate_game (ruff)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 01:35:18 -05:00
cal
de9b511ae9 Merge pull request 'test: refractor system Tier 3 test coverage' (#171) from test/refractor-tier3 into main
All checks were successful
Build Docker Image / build (push) Successful in 8m11s
2026-03-25 04:13:17 +00:00
Cal Corum
906d6e575a test: add Tier 3 refractor test cases (T3-1, T3-6, T3-7, T3-8)
Adds four Tier 3 (medium-priority) test cases to the existing refractor test
suite.  All tests use SQLite in-memory databases and run without a PostgreSQL
connection.

T3-1 (test_refractor_track_api.py): Two tests verifying that
  GET /api/v2/refractor/tracks?card_type= returns 200 with count=0 for both
  an unrecognised card_type value ('foo') and an empty string, rather than
  a 4xx/5xx.  A full SQLite-backed TestClient is added to the track API test
  module for these cases.

T3-6 (test_refractor_state_api.py): Verifies that
  GET /api/v2/refractor/cards/{card_id} returns last_evaluated_at: null (not
  a crash or missing key) when the RefractorCardState was initialised but
  never evaluated.  Adds the SQLite test infrastructure (models, fixtures,
  helper factories, TestClient) to the state API test module.

T3-7 (test_refractor_evaluator.py): Two tests covering fully_evolved/tier
  mismatch correction.  When the database has fully_evolved=True but
  current_tier=3 (corruption), evaluate_card must re-derive fully_evolved
  from the freshly-computed tier (False for tier 3, True for tier 4).

T3-8 (test_refractor_evaluator.py): Two tests confirming per-team stat
  isolation.  A player with BattingSeasonStats on two different teams must
  have each team's RefractorCardState reflect only that team's stats — not
  a combined total.  Covers both same-season and multi-season scenarios.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 21:38:25 -05:00
cal
74284fe5a3 Merge pull request 'test: refractor system Tier 1+2 test coverage' (#170) from test/refractor-tier1-tier2 into main 2026-03-24 21:18:13 +00:00
Cal Corum
569dc53c00 test: add Tier 1 and Tier 2 refractor system test cases
Implements all gap tests identified in the PO review for the refractor
card progression system (Phase 1 foundation).

TIER 1 (critical):
- T1-1: Negative singles guard in compute_batter_value — documents that
  hits=1, doubles=1, triples=1 produces singles=-1 and flows through
  unclamped (value=8.0, not 10.0)
- T1-2: SP tier boundary precision with floats — outs=29 (IP=9.666) stays
  T0, outs=30 (IP=10.0) promotes to T1; also covers T2 float boundary
- T1-3: evaluate-game with non-existent game_id returns 200 with empty results
- T1-4: Seed threshold ordering + positivity invariant (t1<t2<t3<t4, all >0)

TIER 2 (high):
- T2-1: fully_evolved=True persists when stats are zeroed or drop below
  previous tier — no-regression applies to both tier and fully_evolved flag
- T2-2: Parametrized edge cases for _determine_card_type: DH, C, 2B, empty
  string, None, and compound "SP/RP" (resolves to "sp", SP checked first)
- T2-3: evaluate-game with zero StratPlay rows returns empty batch result
- T2-4: GET /teams/{id}/refractors with valid team and zero states is empty
- T2-5: GET /teams/99999/refractors documents 200+empty (no team existence check)
- T2-6: POST /cards/{id}/evaluate with zero season stats stays at T0 value=0.0
- T2-9: Per-player error isolation — patches source module so router's local
  from-import picks up the patched version; one failure, one success = evaluated=1
- T2-10: Each card_type has exactly one RefractorTrack after seeding

All 101 tests pass (15 PostgreSQL-only tests skip without POSTGRES_HOST).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 09:02:30 -05:00
cal
bc6c23ef2e Merge pull request 'feat: capture total_count before limit across all paginated endpoints' (#169) from enhancement/total-count-pagination into main
All checks were successful
Build Docker Image / build (push) Successful in 8m22s
2026-03-24 12:45:53 +00:00
Cal Corum
1e21894898 fix: skip total_count query for CSV requests and consolidate rewards.py counts
- Guard total_count with `if not csv` ternary to avoid unnecessary
  COUNT query on CSV export paths (10 files)
- Consolidate rewards.py from 3 COUNT queries to 1 (used for both
  empty-check and response)
- Clean up scout_claims.py double `if limit is not None` block
- Normalize scout_opportunities.py from max(1,...) to max(0,...)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 07:43:14 -05:00
Cal Corum
54dccd1981 feat: capture total_count before limit across all paginated endpoints
Ensures the `count` field in JSON responses reflects total matching
records rather than the page size, consistent with the notifications
endpoint pattern from PR #150.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 07:37:26 -05:00
Cal Corum
8af43273d2 feat: add limit/pagination to cardpositions endpoint (#142)
Closes #142

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 07:31:59 -05:00
cal
a3ca22b277 Merge pull request 'feat: add limit/pagination to notifications endpoint (#140)' (#150) from issue/140-feat-add-limit-pagination-to-notifications-endpoin into main 2026-03-24 12:13:06 +00:00
cal
268b81aea4 Merge branch 'main' into issue/140-feat-add-limit-pagination-to-notifications-endpoin 2026-03-24 12:12:53 +00:00
cal
b14907d018 Merge pull request 'feat: add limit/pagination to gauntletrewards endpoint (#145)' (#165) from issue/145-feat-add-limit-pagination-to-gauntletrewards-endpo into main 2026-03-24 12:12:32 +00:00
cal
ff132564c2 Merge branch 'main' into issue/145-feat-add-limit-pagination-to-gauntletrewards-endpo 2026-03-24 12:12:07 +00:00
cal
66505915a7 fix: capture total_count before applying limit so response count reflects matching records not page size 2026-03-24 12:11:18 +00:00
cal
f2ff85556a Merge pull request 'feat: add limit/pagination to pitstats endpoint (#134)' (#158) from issue/134-feat-add-limit-pagination-to-pitstats-endpoint into main 2026-03-24 12:10:59 +00:00
cal
dbd61a6957 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:10:37 +00:00
cal
11794d8c2a Merge pull request 'feat: add limit/pagination to rewards endpoint (#139)' (#152) from issue/139-feat-add-limit-pagination-to-rewards-endpoint into main 2026-03-24 12:09:58 +00:00
cal
85e8b3f37b Merge branch 'main' into issue/139-feat-add-limit-pagination-to-rewards-endpoint 2026-03-24 12:09:54 +00:00
cal
f2e10bcf2f Merge pull request 'feat: add limit/pagination to events endpoint (#147)' (#156) from issue/147-feat-add-limit-pagination-to-events-endpoint into main 2026-03-24 12:09:45 +00:00
cal
88c0d0cc13 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:09:43 +00:00
cal
457189fcd8 Merge branch 'main' into issue/145-feat-add-limit-pagination-to-gauntletrewards-endpo 2026-03-24 12:09:31 +00:00
cal
c64f389d64 Merge branch 'main' into issue/147-feat-add-limit-pagination-to-events-endpoint 2026-03-24 12:09:22 +00:00
cal
67af5cd94a Merge branch 'main' into issue/139-feat-add-limit-pagination-to-rewards-endpoint 2026-03-24 12:08:51 +00:00
cal
15ee0764d6 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:08:49 +00:00
cal
ed35773dd0 Merge pull request 'feat: add limit/pagination to gauntletruns endpoint (#146)' (#160) from issue/146-feat-add-limit-pagination-to-gauntletruns-endpoint into main 2026-03-24 12:08:47 +00:00
cal
c1a8808cd3 Merge pull request 'feat: add limit/pagination to pitchingcardratings endpoint (#136)' (#161) from issue/136-feat-add-limit-pagination-to-pitchingcardratings-e into main 2026-03-24 12:08:43 +00:00
cal
23b95f2d3d Merge branch 'main' into issue/139-feat-add-limit-pagination-to-rewards-endpoint 2026-03-24 12:08:39 +00:00
cal
ae2e7320c5 Merge branch 'main' into issue/147-feat-add-limit-pagination-to-events-endpoint 2026-03-24 12:08:37 +00:00
Cal Corum
e7fcf611da feat: add limit/pagination to gauntletruns endpoint (#146)
Closes #146

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 07:08:37 -05:00
cal
2a1d017aa6 Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:08:28 +00:00
cal
ac8ec4b283 fix: clamp limit to 0 minimum to prevent negative limit values 2026-03-24 12:08:15 +00:00
cal
6037d3b53f Merge pull request 'feat: add limit/pagination to stratgame (games) endpoint (#138)' (#164) from issue/138-feat-add-limit-pagination-to-stratgame-games-endpo into main 2026-03-24 12:08:14 +00:00
cal
ae834692aa Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:08:13 +00:00
cal
623e93c38a Merge branch 'main' into issue/138-feat-add-limit-pagination-to-stratgame-games-endpo 2026-03-24 12:08:04 +00:00
cal
da21e83f0f Merge pull request 'feat: add limit/pagination to results endpoint (#137)' (#163) from issue/137-feat-add-limit-pagination-to-results-endpoint into main 2026-03-24 12:08:00 +00:00
cal
3bbf364a74 Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:07:58 +00:00
cal
01482519b5 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:07:48 +00:00
cal
5d2a78749f Merge branch 'main' into issue/147-feat-add-limit-pagination-to-events-endpoint 2026-03-24 12:07:47 +00:00
cal
556d18f64c Merge branch 'main' into issue/137-feat-add-limit-pagination-to-results-endpoint 2026-03-24 12:07:37 +00:00
cal
11e8fba6c5 Merge pull request 'feat: add limit/pagination to scout_opportunities endpoint (#148)' (#154) from issue/148-feat-add-limit-pagination-to-scout-opportunities-e into main 2026-03-24 12:07:32 +00:00
cal
849d14a1ec Merge pull request 'feat: add limit/pagination to awards endpoint (#132)' (#157) from issue/132-feat-add-limit-pagination-to-awards-endpoint into main 2026-03-24 12:07:29 +00:00
cal
d470a132e2 Merge branch 'main' into issue/138-feat-add-limit-pagination-to-stratgame-games-endpo 2026-03-24 12:07:22 +00:00
cal
cc98a3b368 Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:07:18 +00:00
cal
eebef62bc9 Merge branch 'main' into issue/147-feat-add-limit-pagination-to-events-endpoint 2026-03-24 12:07:15 +00:00
cal
a23757bb8e Merge branch 'main' into issue/132-feat-add-limit-pagination-to-awards-endpoint 2026-03-24 12:07:10 +00:00
cal
aeb37c20f2 Merge branch 'main' into issue/148-feat-add-limit-pagination-to-scout-opportunities-e 2026-03-24 12:06:57 +00:00
cal
7ca8e48004 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:06:55 +00:00
cal
0c7a133906 Merge branch 'main' into issue/145-feat-add-limit-pagination-to-gauntletrewards-endpo 2026-03-24 12:06:54 +00:00
cal
01c7b19137 Merge branch 'main' into issue/137-feat-add-limit-pagination-to-results-endpoint 2026-03-24 12:06:51 +00:00
cal
9a320a0e4e Merge pull request 'feat: add limit/pagination to mlbplayers endpoint (#141)' (#162) from issue/141-feat-add-limit-pagination-to-mlbplayers-endpoint into main 2026-03-24 12:06:48 +00:00
cal
67bcaa1b9b Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:06:48 +00:00
cal
77179d3c9c fix: clamp limit lower bound to 1 to prevent silent empty responses
Addresses reviewer feedback: max(0,...) admitted limit=0 which would
silently return no results even when matching records exist.
Changed to max(1,...) consistent with feedback on PRs #149 and #152.
2026-03-24 12:06:37 +00:00
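The clamp discussed in this review thread, as a helper. The 500 cap and the floor of 1 come from the commits above (the floor of 1 so that limit=0 cannot silently empty the response); the helper name is illustrative:

```python
def clamp_limit(limit: int, cap: int = 500) -> int:
    # Floor at 1 (not 0) per reviewer feedback on PRs #149/#152:
    # max(0, ...) admitted limit=0, which silently returned no results.
    return max(1, min(limit, cap))
```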
cal
fe4d22f28a Merge branch 'main' into issue/132-feat-add-limit-pagination-to-awards-endpoint 2026-03-24 12:06:35 +00:00
cal
3a65d84682 Merge branch 'main' into issue/141-feat-add-limit-pagination-to-mlbplayers-endpoint 2026-03-24 12:06:30 +00:00
cal
c39360fa57 Merge pull request 'feat: add limit/pagination to gamerewards endpoint (#144)' (#166) from issue/144-feat-add-limit-pagination-to-gamerewards-endpoint into main 2026-03-24 12:06:22 +00:00
cal
9a8558db3a Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:06:08 +00:00
cal
dbc473b1b5 Merge branch 'main' into issue/139-feat-add-limit-pagination-to-rewards-endpoint 2026-03-24 12:05:54 +00:00
cal
c34ae56ed1 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:05:54 +00:00
cal
8f29c34985 Merge branch 'main' into issue/137-feat-add-limit-pagination-to-results-endpoint 2026-03-24 12:05:53 +00:00
cal
e79fe8384f Merge branch 'main' into issue/145-feat-add-limit-pagination-to-gauntletrewards-endpo 2026-03-24 12:05:51 +00:00
cal
38a06ca4e9 Merge branch 'main' into issue/132-feat-add-limit-pagination-to-awards-endpoint 2026-03-24 12:05:50 +00:00
cal
042392ca18 Merge branch 'main' into issue/144-feat-add-limit-pagination-to-gamerewards-endpoint 2026-03-24 12:05:48 +00:00
cal
e696a9af1a Merge branch 'main' into issue/138-feat-add-limit-pagination-to-stratgame-games-endpo 2026-03-24 12:05:43 +00:00
cal
5fba31a325 Merge pull request 'feat: add limit/pagination to battingcardratings endpoint (#135)' (#159) from issue/135-feat-add-limit-pagination-to-battingcardratings-en into main 2026-03-24 12:05:36 +00:00
cal
13e4a1a956 Merge branch 'main' into issue/147-feat-add-limit-pagination-to-events-endpoint 2026-03-24 12:05:33 +00:00
cal
7b764e8821 Merge branch 'main' into issue/134-feat-add-limit-pagination-to-pitstats-endpoint 2026-03-24 12:05:07 +00:00
cal
fd5d44c3ce Merge branch 'main' into issue/144-feat-add-limit-pagination-to-gamerewards-endpoint 2026-03-24 12:05:03 +00:00
cal
7c69a56b30 Merge branch 'main' into issue/141-feat-add-limit-pagination-to-mlbplayers-endpoint 2026-03-24 12:05:01 +00:00
cal
a5857ff353 Merge branch 'main' into issue/145-feat-add-limit-pagination-to-gauntletrewards-endpo 2026-03-24 12:04:58 +00:00
cal
2a2a20b1d7 Merge branch 'main' into issue/139-feat-add-limit-pagination-to-rewards-endpoint 2026-03-24 12:04:57 +00:00
cal
425b602c1c Merge branch 'main' into issue/137-feat-add-limit-pagination-to-results-endpoint 2026-03-24 12:04:55 +00:00
cal
6b5fe3a440 Merge branch 'main' into issue/136-feat-add-limit-pagination-to-pitchingcardratings-e 2026-03-24 12:04:53 +00:00
cal
2486d85112 Merge branch 'main' into issue/138-feat-add-limit-pagination-to-stratgame-games-endpo 2026-03-24 12:04:53 +00:00
cal
cf668703ed Merge branch 'main' into issue/135-feat-add-limit-pagination-to-battingcardratings-en 2026-03-24 12:04:49 +00:00
cal
f784963f79 Merge branch 'main' into issue/132-feat-add-limit-pagination-to-awards-endpoint 2026-03-24 12:04:44 +00:00
cal
630b334528 Merge pull request 'feat: add limit/pagination to batstats endpoint (#133)' (#155) from issue/133-feat-add-limit-pagination-to-batstats-endpoint into main 2026-03-24 12:04:40 +00:00
cal
58f408020a Merge pull request 'feat: add limit/pagination to scout_claims endpoint (#149)' (#151) from issue/149-feat-add-limit-pagination-to-scout-claims-endpoint into main 2026-03-24 12:04:38 +00:00
cal
ecc37c1df2 Merge branch 'main' into issue/139-feat-add-limit-pagination-to-rewards-endpoint 2026-03-24 12:04:32 +00:00
cal
a481c5361a Merge branch 'main' into issue/149-feat-add-limit-pagination-to-scout-claims-endpoint 2026-03-24 12:04:26 +00:00
Cal Corum
87c200d62b feat: add limit/pagination to paperdex endpoint (#143)
Closes #143

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 07:02:47 -05:00
Cal Corum
9d471ec1de feat: add limit/pagination to gamerewards endpoint (#144)
Closes #144

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 06:32:10 -05:00
Cal Corum
2da984f1eb feat: add limit/pagination to gauntletrewards endpoint (#145)
Closes #145

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 06:02:02 -05:00
Cal Corum
4f693b1228 feat: add limit/pagination to stratgame (games) endpoint (#138)
Closes #138

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 05:32:28 -05:00
Cal Corum
15aac6cb73 feat: add limit/pagination to results endpoint (#137)
Closes #137

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 05:02:35 -05:00
Cal Corum
9391591263 feat: add limit/pagination to mlbplayers endpoint (#141)
Closes #141

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 04:32:15 -05:00
Cal Corum
2f56942721 feat: add limit/pagination to pitchingcardratings endpoint (#136)
Closes #136

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 04:02:46 -05:00
Cal Corum
dc88b1539c feat: add limit/pagination to battingcardratings endpoint (#135)
Closes #135

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 03:03:13 -05:00
Cal Corum
8c9aa55609 feat: add limit/pagination to pitstats endpoint (#134)
Closes #134

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 02:32:10 -05:00
Cal Corum
e328ad639a feat: add limit/pagination to awards endpoint (#132)
Add optional limit query param (default 100, max 500) to GET /api/v2/awards.
Clamped via max(0, min(limit, 500)) to guard negative values and upper bound.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 02:01:50 -05:00
Cal Corum
0f884a3516 feat: add limit/pagination to events endpoint (#147)
Closes #147

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 01:32:19 -05:00
Cal Corum
6034b4f173 feat: add limit/pagination to batstats endpoint (#133)
Closes #133

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 01:03:12 -05:00
Cal Corum
f9817b3d04 feat: add limit/pagination to scout_opportunities endpoint (#148)
Closes #148

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 00:32:27 -05:00
cal
6a217f97ee Merge pull request 'ci: database CI catchup — local buildx cache + dev tag trigger' (#153) from ci/database-ci-catchup into main
All checks were successful
Build Docker Image / build (push) Successful in 7m53s
2026-03-24 05:17:04 +00:00
Cal Corum
d0f45d5d38 ci: switch buildx cache from registry to local volume
Replaces type=registry cache (which causes 400 errors from Docker Hub
due to stale buildx builders) with type=local backed by a named Docker
volume on the runner. Adds cache rotation step to prevent unbounded growth.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 00:15:38 -05:00
Cal Corum
c185d72f1b ci: add dev tag trigger to Docker build workflow
Allows deploying to dev environment by pushing a "dev" tag.
Dev tags build with :dev Docker tag instead of :production.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 00:15:38 -05:00
Cal Corum
890625e770 feat: add limit/pagination to rewards endpoint (#139)
Closes #139

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 00:02:08 -05:00
Cal Corum
f3aab6fb73 feat: add limit/pagination to scout_claims endpoint (#149)
Closes #149

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 23:32:01 -05:00
Cal Corum
426d559387 feat: add limit/pagination to notifications endpoint (#140)
Closes #140

Adds optional `limit` query param to `GET /api/v2/notifs` with default
100 and max 500. Limit is applied after all filters.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 23:02:18 -05:00
Cal Corum
e12dac347e ci: switch buildx cache from registry to local volume
All checks were successful
Build Docker Image / build (push) Successful in 10m39s
Replaces type=registry cache (which causes 400 errors from Docker Hub
due to stale buildx builders) with type=local backed by a named Docker
volume on the runner. Adds cache rotation step to prevent unbounded growth.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:16:00 -05:00
Cal Corum
d4d93cd95e ci: add dev tag trigger to Docker build workflow
Some checks are pending
Build Docker Image / build (push) Waiting to run
Allows deploying to dev environment by pushing a "dev" tag.
Dev tags build with :dev Docker tag instead of :production.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 16:05:17 -05:00
cal
6a7400484e Merge pull request 'refactor: rename Evolution system to Refractor' (#131) from refactor/evolution-to-refractor-rename into main 2026-03-23 19:23:49 +00:00
Cal Corum
dc937dcabc fix: update stale evolution comment in cards.py
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 14:23:00 -05:00
Cal Corum
500a8f3848 fix: complete remaining evolution→refractor renames from review
- Rename route /{team_id}/evolutions → /{team_id}/refractors
- Rename function initialize_card_evolution → initialize_card_refractor
- Rename index names in migration SQL
- Update all test references

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 14:17:03 -05:00
Cal Corum
b7dec3f231 refactor: rename evolution system to refractor
Complete rename of the card progression system from "Evolution" to
"Refractor" across all code, routes, models, services, seeds, and tests.

- Route prefix: /api/v2/evolution → /api/v2/refractor
- Model classes: EvolutionTrack → RefractorTrack, etc.
- 12 files renamed, 8 files content-edited
- New migration to rename DB tables
- 117 tests pass, no logic changes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 13:31:55 -05:00
cal
0b6e85fff9 Merge pull request 'feat: Card Evolution Phase 1 — full backend implementation' (#130) from card-evolution into main 2026-03-23 18:20:20 +00:00
cal
c3b616dcfa Merge branch 'main' into card-evolution 2026-03-23 18:20:07 +00:00
Cal Corum
5ea4c7c86a fix: replace datetime.utcnow() with datetime.now() in evaluator
All checks were successful
Build Docker Image / build (pull_request) Successful in 8m36s
Fixes regression from PR #118 — utcnow() was reintroduced in
evolution_evaluator.py.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 12:54:01 -05:00
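The fix is mechanical; a hypothetical helper illustrating the convention (the evaluator's real code is not shown here):

```python
from datetime import datetime

def now_for_storage() -> datetime:
    """Return the timestamp the evaluator should store.

    datetime.utcnow() is deprecated (Python 3.12+) and returns a naive
    datetime offset from local time with no tzinfo to flag it; per the
    convention from PR #118, the codebase uses datetime.now() instead.
    """
    return datetime.now()
```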
Cal Corum
d15fc97afb fix: add pitcher_id null guard in _get_player_pairs
All checks were successful
Build Docker Image / build (pull_request) Successful in 8m53s
Prevents (None, team_id) tuples from being added to pitching_pairs
when a StratPlay row has no pitcher (edge case matching the existing
batter_id guard).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 12:50:16 -05:00
cal
9ca9ea4f80 Merge pull request 'ci: switch to tag-based Docker builds' (#129) from ci/tag-based-docker-builds into main 2026-03-23 17:22:07 +00:00
Cal Corum
bdc61b4e2f ci: switch to tag-based Docker builds
Replace branch/PR-triggered Docker builds with tag-only triggers.
Images are now built only when a CalVer tag is pushed
(git tag YYYY.M.BUILD && git push origin YYYY.M.BUILD).

- Remove calver, docker-tags, and gitea-tag reusable actions
- Add inline version extraction from tag ref
- Add build cache (was missing)
- Update CLAUDE.md: remove stale next-release release workflow

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 10:49:23 -05:00
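The tag-to-image mapping this commit introduces can be sketched as a small resolver. A rough Python equivalent of the workflow's logic (the workflow's actual trigger is the looser glob `20*`, so this regex is an assumption, not the exact filter):

```python
import re

IMAGE = "manticorum67/paper-dynasty-database"

def docker_tags_for(tag: str) -> list[str]:
    """Map a pushed git tag to the Docker tags the workflow publishes.

    CalVer tags (YYYY.M.BUILD) publish the version plus "production";
    the literal "dev" tag publishes only "dev". Anything else would
    simply not trigger the workflow.
    """
    if tag == "dev":
        return [f"{IMAGE}:dev"]
    if re.fullmatch(r"20\d{2}\.\d{1,2}\.\d+", tag):
        return [f"{IMAGE}:{tag}", f"{IMAGE}:production"]
    raise ValueError(f"tag {tag!r} would not trigger a build")
```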
cal
1b2f8a7302 Merge pull request 'fix: remove SQLite references from CLAUDE.md (#123)' (#127) from ai/paper-dynasty-database#123 into main
All checks were successful
Build Docker Image / build (push) Successful in 8m50s
2026-03-23 13:32:14 +00:00
Cal Corum
30a6e003e8 fix: remove SQLite references from CLAUDE.md (#123)
All checks were successful
Build Docker Image / build (pull_request) Successful in 8m15s
Closes #123

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 00:01:53 -05:00
57 changed files with 9495 additions and 2954 deletions


@@ -1,32 +1,48 @@
# Gitea Actions: Docker Build, Push, and Notify
#
# CI/CD pipeline for Paper Dynasty Database API:
# - Builds Docker images on every push/PR
# - Auto-generates CalVer version (YYYY.MM.BUILD) on main branch merges
# - Supports multi-channel releases: stable (main), rc (next-release), dev (PRs)
# - Pushes to Docker Hub and creates git tag on main
# - Triggered by pushing a CalVer tag (e.g., 2026.3.11) or "dev" tag
# - CalVer tags push with version + "production" Docker tags
# - "dev" tag pushes with "dev" Docker tag for the dev environment
# - Sends Discord notifications on success/failure
#
# To release: git tag 2026.3.11 && git push origin 2026.3.11
# To deploy dev: git tag -f dev && git push origin dev --force
name: Build Docker Image
on:
push:
branches:
- main
- next-release
pull_request:
branches:
- main
tags:
- '20*' # matches CalVer tags like 2026.3.11
- 'dev' # dev environment builds
jobs:
build:
runs-on: ubuntu-latest
container:
volumes:
- pd-buildx-cache:/opt/buildx-cache
steps:
- name: Checkout code
uses: https://github.com/actions/checkout@v4
with:
fetch-depth: 0 # Full history for tag counting
fetch-depth: 0
- name: Extract version from tag
id: version
run: |
VERSION=${GITHUB_REF#refs/tags/}
SHA_SHORT=$(git rev-parse --short HEAD)
echo "version=$VERSION" >> $GITHUB_OUTPUT
echo "sha_short=$SHA_SHORT" >> $GITHUB_OUTPUT
echo "timestamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> $GITHUB_OUTPUT
if [ "$VERSION" = "dev" ]; then
echo "environment=dev" >> $GITHUB_OUTPUT
else
echo "environment=production" >> $GITHUB_OUTPUT
fi
- name: Set up Docker Buildx
uses: https://github.com/docker/setup-buildx-action@v3
@@ -37,65 +53,52 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Generate CalVer version
id: calver
uses: cal/gitea-actions/calver@main
- name: Resolve Docker tags
id: tags
uses: cal/gitea-actions/docker-tags@main
with:
image: manticorum67/paper-dynasty-database
version: ${{ steps.calver.outputs.version }}
sha_short: ${{ steps.calver.outputs.sha_short }}
- name: Build and push Docker image
uses: https://github.com/docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.tags.outputs.tags }}
tags: |
manticorum67/paper-dynasty-database:${{ steps.version.outputs.version }}
manticorum67/paper-dynasty-database:${{ steps.version.outputs.environment }}
cache-from: type=local,src=/opt/buildx-cache/pd-database
cache-to: type=local,dest=/opt/buildx-cache/pd-database-new,mode=max
- name: Tag release
if: success() && github.ref == 'refs/heads/main'
uses: cal/gitea-actions/gitea-tag@main
with:
version: ${{ steps.calver.outputs.version }}
token: ${{ github.token }}
- name: Rotate cache
run: |
rm -rf /opt/buildx-cache/pd-database
mv /opt/buildx-cache/pd-database-new /opt/buildx-cache/pd-database
- name: Build Summary
run: |
echo "## Docker Build Successful" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Channel:** \`${{ steps.tags.outputs.channel }}\`" >> $GITHUB_STEP_SUMMARY
echo "**Version:** \`${{ steps.version.outputs.version }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Image Tags:**" >> $GITHUB_STEP_SUMMARY
IFS=',' read -ra TAG_ARRAY <<< "${{ steps.tags.outputs.tags }}"
for tag in "${TAG_ARRAY[@]}"; do
echo "- \`${tag}\`" >> $GITHUB_STEP_SUMMARY
done
echo "- \`manticorum67/paper-dynasty-database:${{ steps.version.outputs.version }}\`" >> $GITHUB_STEP_SUMMARY
echo "- \`manticorum67/paper-dynasty-database:${{ steps.version.outputs.environment }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Build Details:**" >> $GITHUB_STEP_SUMMARY
echo "- Branch: \`${{ steps.calver.outputs.branch }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Commit: \`${{ github.sha }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Timestamp: \`${{ steps.calver.outputs.timestamp }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Commit: \`${{ steps.version.outputs.sha_short }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Timestamp: \`${{ steps.version.outputs.timestamp }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "Pull with: \`docker pull manticorum67/paper-dynasty-database:${{ steps.tags.outputs.primary_tag }}\`" >> $GITHUB_STEP_SUMMARY
echo "Pull with: \`docker pull manticorum67/paper-dynasty-database:${{ steps.version.outputs.version }}\`" >> $GITHUB_STEP_SUMMARY
- name: Discord Notification - Success
if: success() && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/next-release')
if: success()
uses: cal/gitea-actions/discord-notify@main
with:
webhook_url: ${{ secrets.DISCORD_WEBHOOK }}
title: "Paper Dynasty Database"
status: success
version: ${{ steps.calver.outputs.version }}
image_tag: ${{ steps.tags.outputs.primary_tag }}
commit_sha: ${{ steps.calver.outputs.sha_short }}
timestamp: ${{ steps.calver.outputs.timestamp }}
version: ${{ steps.version.outputs.version }}
image_tag: ${{ steps.version.outputs.version }}
commit_sha: ${{ steps.version.outputs.sha_short }}
timestamp: ${{ steps.version.outputs.timestamp }}
- name: Discord Notification - Failure
if: failure() && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/next-release')
if: failure()
uses: cal/gitea-actions/discord-notify@main
with:
webhook_url: ${{ secrets.DISCORD_WEBHOOK }}

.gitignore vendored

@@ -59,6 +59,8 @@ pyenv.cfg
pyvenv.cfg
docker-compose.override.yml
docker-compose.*.yml
.run-local.pid
.env.local
*.db
venv
.claude/


@@ -1,6 +1,6 @@
# Paper Dynasty Database API
FastAPI backend for baseball card game data. Peewee ORM with SQLite (WAL mode).
FastAPI backend for baseball card game data. Peewee ORM with PostgreSQL.
## Commands
@@ -14,7 +14,7 @@ docker build -t paper-dynasty-db . # Build image
## Architecture
- **Routers**: Domain-based in `app/routers_v2/` (cards, players, teams, packs, stats, gauntlets, scouting)
- **ORM**: Peewee with SQLite (`storage/pd_master.db`, WAL journaling)
- **ORM**: Peewee with PostgreSQL
- **Card images**: Playwright/Chromium renders HTML templates → screenshots (see `routers_v2/players.py`)
- **Logging**: Rotating files in `logs/database/{date}.log`
@@ -42,21 +42,14 @@ docker build -t paper-dynasty-db . # Build image
- **API docs**: `/api/docs` and `/api/redoc`
### Key Env Vars
`API_TOKEN`, `LOG_LEVEL`, `DATABASE_TYPE` (sqlite/postgresql), `POSTGRES_HOST`, `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`
`API_TOKEN`, `LOG_LEVEL`, `DATABASE_TYPE`, `POSTGRES_HOST`, `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`
### Common Issues
- 502 Bad Gateway → API container crashed; check `docker logs pd_api`
- Card image generation failures → Playwright/Chromium issue; check for missing dependencies
- SQLite locking (dev) → WAL mode should prevent, but check for long-running writes
- DB connection errors → verify `POSTGRES_HOST` points to correct container name
- **CI/CD**: Gitea Actions on PR to `main` — builds Docker image, auto-generates CalVer version (`YYYY.MM.BUILD`) on merge
### Release Workflow
1. Create feature/fix branches off `next-release` (e.g., `fix/card-pricing`)
2. When done, merge the branch into `next-release` — this is the staging branch where changes accumulate
3. When ready to release, open a PR from `next-release``main`
4. CI builds Docker image on PR; CalVer tag is created on merge
5. Deploy the new image to production
- **CI/CD**: Gitea Actions on CalVer tag push — builds Docker image and pushes to Docker Hub
- **Release**: `git tag YYYY.M.BUILD && git push origin YYYY.M.BUILD` → CI builds + pushes image + notifies Discord
## Important


@@ -474,6 +474,7 @@ class Card(BaseModel):
team = ForeignKeyField(Team, null=True)
pack = ForeignKeyField(Pack, null=True)
value = IntegerField(default=0)
variant = IntegerField(null=True, default=None)
def __str__(self):
if self.player:
@@ -755,6 +756,7 @@ class BattingCard(BaseModel):
running = IntegerField()
offense_col = IntegerField()
hand = CharField(default="R")
image_url = CharField(null=True, max_length=500)
class Meta:
database = db
@@ -824,6 +826,7 @@ class PitchingCard(BaseModel):
batting = CharField(null=True)
offense_col = IntegerField()
hand = CharField(default="R")
image_url = CharField(null=True, max_length=500)
class Meta:
database = db
@@ -1210,7 +1213,7 @@ if not SKIP_TABLE_CREATION:
db.create_tables([ScoutOpportunity, ScoutClaim], safe=True)
class EvolutionTrack(BaseModel):
class RefractorTrack(BaseModel):
name = CharField(unique=True)
card_type = CharField() # 'batter', 'sp', 'rp'
formula = CharField() # e.g. "pa + tb * 2"
@@ -1221,33 +1224,41 @@ class EvolutionTrack(BaseModel):
class Meta:
database = db
table_name = "evolution_track"
table_name = "refractor_track"
class EvolutionCardState(BaseModel):
class RefractorCardState(BaseModel):
player = ForeignKeyField(Player)
team = ForeignKeyField(Team)
track = ForeignKeyField(EvolutionTrack)
track = ForeignKeyField(RefractorTrack)
current_tier = IntegerField(default=0) # 0-4
current_value = FloatField(default=0.0)
fully_evolved = BooleanField(default=False)
last_evaluated_at = DateTimeField(null=True)
variant = IntegerField(null=True)
class Meta:
database = db
table_name = "evolution_card_state"
table_name = "refractor_card_state"
evolution_card_state_index = ModelIndex(
EvolutionCardState,
(EvolutionCardState.player, EvolutionCardState.team),
refractor_card_state_index = ModelIndex(
RefractorCardState,
(RefractorCardState.player, RefractorCardState.team),
unique=True,
)
EvolutionCardState.add_index(evolution_card_state_index)
RefractorCardState.add_index(refractor_card_state_index)
refractor_card_state_team_index = ModelIndex(
RefractorCardState,
(RefractorCardState.team,),
unique=False,
)
RefractorCardState.add_index(refractor_card_state_team_index)
class EvolutionTierBoost(BaseModel):
track = ForeignKeyField(EvolutionTrack)
class RefractorTierBoost(BaseModel):
track = ForeignKeyField(RefractorTrack)
tier = IntegerField() # 1-4
boost_type = CharField() # e.g. 'rating', 'stat'
boost_target = CharField() # e.g. 'contact_vl', 'power_vr'
@@ -1255,23 +1266,23 @@ class EvolutionTierBoost(BaseModel):
class Meta:
database = db
table_name = "evolution_tier_boost"
table_name = "refractor_tier_boost"
evolution_tier_boost_index = ModelIndex(
EvolutionTierBoost,
refractor_tier_boost_index = ModelIndex(
RefractorTierBoost,
(
EvolutionTierBoost.track,
EvolutionTierBoost.tier,
EvolutionTierBoost.boost_type,
EvolutionTierBoost.boost_target,
RefractorTierBoost.track,
RefractorTierBoost.tier,
RefractorTierBoost.boost_type,
RefractorTierBoost.boost_target,
),
unique=True,
)
EvolutionTierBoost.add_index(evolution_tier_boost_index)
RefractorTierBoost.add_index(refractor_tier_boost_index)
class EvolutionCosmetic(BaseModel):
class RefractorCosmetic(BaseModel):
name = CharField(unique=True)
tier_required = IntegerField(default=0)
cosmetic_type = CharField() # 'frame', 'badge', 'theme'
@@ -1280,12 +1291,32 @@ class EvolutionCosmetic(BaseModel):
class Meta:
database = db
table_name = "evolution_cosmetic"
table_name = "refractor_cosmetic"
class RefractorBoostAudit(BaseModel):
card_state = ForeignKeyField(RefractorCardState, on_delete="CASCADE")
tier = IntegerField() # 1-4
battingcard = ForeignKeyField(BattingCard, null=True)
pitchingcard = ForeignKeyField(PitchingCard, null=True)
variant_created = IntegerField()
boost_delta_json = TextField() # JSONB in PostgreSQL; TextField for SQLite test compat
applied_at = DateTimeField(default=datetime.now)
class Meta:
database = db
table_name = "refractor_boost_audit"
if not SKIP_TABLE_CREATION:
db.create_tables(
[EvolutionTrack, EvolutionCardState, EvolutionTierBoost, EvolutionCosmetic],
[
RefractorTrack,
RefractorCardState,
RefractorTierBoost,
RefractorCosmetic,
RefractorBoostAudit,
],
safe=True,
)


@@ -51,7 +51,7 @@ from .routers_v2 import ( # noqa: E402
stratplays,
scout_opportunities,
scout_claims,
evolution,
refractor,
season_stats,
)
@@ -107,7 +107,7 @@ app.include_router(stratplays.router)
app.include_router(decisions.router)
app.include_router(scout_opportunities.router)
app.include_router(scout_claims.router)
app.include_router(evolution.router)
app.include_router(refractor.router)
app.include_router(season_stats.router)


@@ -8,16 +8,13 @@ from ..db_engine import Award, model_to_dict, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token, PRIVATE_IN_SCHEMA
router = APIRouter(
prefix='/api/v2/awards',
tags=['awards']
)
router = APIRouter(prefix="/api/v2/awards", tags=["awards"])
class AwardModel(pydantic.BaseModel):
name: str
season: int
timing: str = 'In-Season'
timing: str = "In-Season"
card_id: Optional[int] = None
team_id: Optional[int] = None
image: Optional[str] = None
@@ -28,15 +25,21 @@ class AwardReturnList(pydantic.BaseModel):
awards: list[AwardModel]
@router.get('')
@router.get("")
async def get_awards(
name: Optional[str] = None, season: Optional[int] = None, timing: Optional[str] = None,
card_id: Optional[int] = None, team_id: Optional[int] = None, image: Optional[str] = None,
csv: Optional[bool] = None):
name: Optional[str] = None,
season: Optional[int] = None,
timing: Optional[str] = None,
card_id: Optional[int] = None,
team_id: Optional[int] = None,
image: Optional[str] = None,
csv: Optional[bool] = None,
limit: int = 100,
):
all_awards = Award.select().order_by(Award.id)
if all_awards.count() == 0:
raise HTTPException(status_code=404, detail=f'There are no awards to filter')
raise HTTPException(status_code=404, detail="There are no awards to filter")
if name is not None:
all_awards = all_awards.where(Award.name == name)
@@ -51,53 +54,74 @@ async def get_awards(
if image is not None:
all_awards = all_awards.where(Award.image == image)
limit = max(0, min(limit, 500))
total_count = all_awards.count() if not csv else 0
all_awards = all_awards.limit(limit)
if csv:
data_list = [['id', 'name', 'season', 'timing', 'card', 'team', 'image']]
data_list = [["id", "name", "season", "timing", "card", "team", "image"]]
for line in all_awards:
data_list.append([
line.id, line.name, line.season, line.timing, line.card, line.team, line.image
])
data_list.append(
[
line.id,
line.name,
line.season,
line.timing,
line.card,
line.team,
line.image,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_awards.count(), 'awards': []}
return_val = {"count": total_count, "awards": []}
for x in all_awards:
return_val['awards'].append(model_to_dict(x))
return_val["awards"].append(model_to_dict(x))
return return_val
@router.get('/{award_id}')
@router.get("/{award_id}")
async def get_one_award(award_id, csv: Optional[bool] = None):
try:
this_award = Award.get_by_id(award_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No award found with id {award_id}')
raise HTTPException(
status_code=404, detail=f"No award found with id {award_id}"
)
if csv:
data_list = [
['id', 'name', 'season', 'timing', 'card', 'team', 'image'],
[this_award.id, this_award.name, this_award.season, this_award.timing, this_award.card,
this_award.team, this_award.image]
["id", "name", "season", "timing", "card", "team", "image"],
[
this_award.id,
this_award.name,
this_award.season,
this_award.timing,
this_award.card,
this_award.team,
this_award.image,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_award)
return return_val
@router.post('', include_in_schema=PRIVATE_IN_SCHEMA)
@router.post("", include_in_schema=PRIVATE_IN_SCHEMA)
async def post_awards(award: AwardModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post awards. This event has been logged.'
detail="You are not authorized to post awards. This event has been logged.",
)
this_award = Award(
@@ -106,7 +130,7 @@ async def post_awards(award: AwardModel, token: str = Depends(oauth2_scheme)):
timing=award.season,
card_id=award.card_id,
team_id=award.team_id,
image=award.image
image=award.image,
)
saved = this_award.save()
@@ -116,28 +140,30 @@
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that roster'
detail="Well slap my ass and call me a teapot; I could not save that roster",
)
@router.delete('/{award_id}', include_in_schema=PRIVATE_IN_SCHEMA)
@router.delete("/{award_id}", include_in_schema=PRIVATE_IN_SCHEMA)
async def delete_award(award_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete awards. This event has been logged.'
detail="You are not authorized to delete awards. This event has been logged.",
)
try:
this_award = Award.get_by_id(award_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No award found with id {award_id}')
raise HTTPException(
status_code=404, detail=f"No award found with id {award_id}"
)
count = this_award.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Award {award_id} has been deleted')
raise HTTPException(
status_code=200, detail=f"Award {award_id} has been deleted"
)
else:
raise HTTPException(status_code=500, detail=f'Award {award_id} was not deleted')
raise HTTPException(status_code=500, detail=f"Award {award_id} was not deleted")


@@ -6,14 +6,20 @@ import logging
import pydantic
from pandas import DataFrame
from ..db_engine import db, BattingStat, model_to_dict, fn, Card, Player, Current, DoesNotExist
from ..db_engine import (
db,
BattingStat,
model_to_dict,
fn,
Card,
Player,
Current,
DoesNotExist,
)
from ..dependencies import oauth2_scheme, valid_token, PRIVATE_IN_SCHEMA
router = APIRouter(
prefix='/api/v2/batstats',
tags=['Pre-Season 7 Batting Stats']
)
router = APIRouter(prefix="/api/v2/batstats", tags=["Pre-Season 7 Batting Stats"])
class BatStat(pydantic.BaseModel):
@@ -50,7 +56,7 @@ class BatStat(pydantic.BaseModel):
csc: Optional[int] = 0
week: int
season: int
created: Optional[int] = int(datetime.timestamp(datetime.now())*1000)
created: Optional[int] = int(datetime.timestamp(datetime.now()) * 1000)
game_id: int
@@ -63,10 +69,20 @@ class BatStatReturnList(pydantic.BaseModel):
stats: list[BatStat]
@router.get('', response_model=BatStatReturnList)
@router.get("", response_model=BatStatReturnList)
async def get_batstats(
card_id: int = None, player_id: int = None, team_id: int = None, vs_team_id: int = None, week: int = None,
season: int = None, week_start: int = None, week_end: int = None, created: int = None, csv: bool = None):
card_id: int = None,
player_id: int = None,
team_id: int = None,
vs_team_id: int = None,
week: int = None,
season: int = None,
week_start: int = None,
week_end: int = None,
created: int = None,
csv: bool = None,
limit: Optional[int] = 100,
):
all_stats = BattingStat.select().join(Card).join(Player).order_by(BattingStat.id)
if season is not None:
@@ -98,41 +114,123 @@ async def get_batstats(
# db.close()
# raise HTTPException(status_code=404, detail=f'No batting stats found')
limit = max(0, min(limit, 500))
total_count = all_stats.count() if not csv else 0
all_stats = all_stats.limit(limit)
if csv:
data_list = [['id', 'card_id', 'player_id', 'cardset', 'team', 'vs_team', 'pos', 'pa', 'ab', 'run', 'hit', 'rbi', 'double',
'triple', 'hr', 'bb', 'so', 'hbp', 'sac', 'ibb', 'gidp', 'sb', 'cs', 'bphr', 'bpfo', 'bp1b',
'bplo', 'xch', 'xhit', 'error', 'pb', 'sbc', 'csc', 'week', 'season', 'created', 'game_id', 'roster_num']]
data_list = [
[
"id",
"card_id",
"player_id",
"cardset",
"team",
"vs_team",
"pos",
"pa",
"ab",
"run",
"hit",
"rbi",
"double",
"triple",
"hr",
"bb",
"so",
"hbp",
"sac",
"ibb",
"gidp",
"sb",
"cs",
"bphr",
"bpfo",
"bp1b",
"bplo",
"xch",
"xhit",
"error",
"pb",
"sbc",
"csc",
"week",
"season",
"created",
"game_id",
"roster_num",
]
]
for line in all_stats:
data_list.append(
[
line.id, line.card.id, line.card.player.player_id, line.card.player.cardset.name, line.team.abbrev, line.vs_team.abbrev,
line.pos, line.pa, line.ab, line.run, line.hit, line.rbi, line.double, line.triple, line.hr,
line.bb, line.so, line.hbp, line.sac, line.ibb, line.gidp, line.sb, line.cs, line.bphr, line.bpfo,
line.bp1b, line.bplo, line.xch, line.xhit, line.error, line.pb, line.sbc, line.csc, line.week,
line.season, line.created, line.game_id, line.roster_num
line.id,
line.card.id,
line.card.player.player_id,
line.card.player.cardset.name,
line.team.abbrev,
line.vs_team.abbrev,
line.pos,
line.pa,
line.ab,
line.run,
line.hit,
line.rbi,
line.double,
line.triple,
line.hr,
line.bb,
line.so,
line.hbp,
line.sac,
line.ibb,
line.gidp,
line.sb,
line.cs,
line.bphr,
line.bpfo,
line.bp1b,
line.bplo,
line.xch,
line.xhit,
line.error,
line.pb,
line.sbc,
line.csc,
line.week,
line.season,
line.created,
line.game_id,
line.roster_num,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_stats.count(), 'stats': []}
return_val = {"count": total_count, "stats": []}
for x in all_stats:
return_val['stats'].append(model_to_dict(x, recurse=False))
return_val["stats"].append(model_to_dict(x, recurse=False))
return return_val
@router.get('/player/{player_id}', response_model=BatStat)
@router.get("/player/{player_id}", response_model=BatStat)
async def get_player_stats(
player_id: int, team_id: int = None, vs_team_id: int = None, week_start: int = None, week_end: int = None,
csv: bool = None):
all_stats = (BattingStat
.select(fn.COUNT(BattingStat.created).alias('game_count'))
.join(Card)
.group_by(BattingStat.card)
.where(BattingStat.card.player == player_id)).scalar()
player_id: int,
team_id: int = None,
vs_team_id: int = None,
week_start: int = None,
week_end: int = None,
csv: bool = None,
):
all_stats = (
BattingStat.select(fn.COUNT(BattingStat.created).alias("game_count"))
.join(Card)
.group_by(BattingStat.card)
.where(BattingStat.card.player == player_id)
).scalar()
if team_id is not None:
all_stats = all_stats.where(BattingStat.team_id == team_id)
@@ -146,37 +244,82 @@ async def get_player_stats(
if csv:
data_list = [
[
'pa', 'ab', 'run', 'hit', 'rbi', 'double', 'triple', 'hr', 'bb', 'so', 'hbp', 'sac', 'ibb', 'gidp',
'sb', 'cs', 'bphr', 'bpfo', 'bp1b', 'bplo', 'xch', 'xhit', 'error', 'pb', 'sbc', 'csc',
],[
all_stats.pa_sum, all_stats.ab_sum, all_stats.run, all_stats.hit_sum, all_stats.rbi_sum,
all_stats.double_sum, all_stats.triple_sum, all_stats.hr_sum, all_stats.bb_sum, all_stats.so_sum,
all_stats.hbp_sum, all_stats.sac, all_stats.ibb_sum, all_stats.gidp_sum, all_stats.sb_sum,
all_stats.cs_sum, all_stats.bphr_sum, all_stats.bpfo_sum, all_stats.bp1b_sum, all_stats.bplo_sum,
all_stats.xch, all_stats.xhit_sum, all_stats.error_sum, all_stats.pb_sum, all_stats.sbc_sum,
all_stats.csc_sum
]
"pa",
"ab",
"run",
"hit",
"rbi",
"double",
"triple",
"hr",
"bb",
"so",
"hbp",
"sac",
"ibb",
"gidp",
"sb",
"cs",
"bphr",
"bpfo",
"bp1b",
"bplo",
"xch",
"xhit",
"error",
"pb",
"sbc",
"csc",
],
[
all_stats.pa_sum,
all_stats.ab_sum,
all_stats.run,
all_stats.hit_sum,
all_stats.rbi_sum,
all_stats.double_sum,
all_stats.triple_sum,
all_stats.hr_sum,
all_stats.bb_sum,
all_stats.so_sum,
all_stats.hbp_sum,
all_stats.sac,
all_stats.ibb_sum,
all_stats.gidp_sum,
all_stats.sb_sum,
all_stats.cs_sum,
all_stats.bphr_sum,
all_stats.bpfo_sum,
all_stats.bp1b_sum,
all_stats.bplo_sum,
all_stats.xch,
all_stats.xhit_sum,
all_stats.error_sum,
all_stats.pb_sum,
all_stats.sbc_sum,
all_stats.csc_sum,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
logging.debug(f'stat pull query: {all_stats}\n')
logging.debug(f"stat pull query: {all_stats}\n")
# logging.debug(f'result 0: {all_stats[0]}\n')
for x in all_stats:
logging.debug(f'this_line: {model_to_dict(x)}')
logging.debug(f"this_line: {model_to_dict(x)}")
return_val = model_to_dict(all_stats[0])
return return_val
@router.post('', include_in_schema=PRIVATE_IN_SCHEMA)
@router.post("", include_in_schema=PRIVATE_IN_SCHEMA)
async def post_batstats(stats: BattingStatModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post stats. This event has been logged.'
detail="You are not authorized to post stats. This event has been logged.",
)
new_stats = []
@@ -215,36 +358,40 @@ async def post_batstats(stats: BattingStatModel, token: str = Depends(oauth2_sch
csc=x.csc,
week=x.week,
season=x.season,
created=datetime.fromtimestamp(x.created / 1000) if x.created else datetime.now(),
game_id=x.game_id
created=datetime.fromtimestamp(x.created / 1000)
if x.created
else datetime.now(),
game_id=x.game_id,
)
new_stats.append(this_stat)
with db.atomic():
BattingStat.bulk_create(new_stats, batch_size=15)
raise HTTPException(status_code=200, detail=f'{len(new_stats)} batting lines have been added')
raise HTTPException(
status_code=200, detail=f"{len(new_stats)} batting lines have been added"
)
@router.delete('/{stat_id}', include_in_schema=PRIVATE_IN_SCHEMA)
@router.delete("/{stat_id}", include_in_schema=PRIVATE_IN_SCHEMA)
async def delete_batstat(stat_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete stats. This event has been logged.'
detail="You are not authorized to delete stats. This event has been logged.",
)
try:
this_stat = BattingStat.get_by_id(stat_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No stat found with id {stat_id}')
raise HTTPException(status_code=404, detail=f"No stat found with id {stat_id}")
count = this_stat.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Stat {stat_id} has been deleted')
raise HTTPException(status_code=200, detail=f"Stat {stat_id} has been deleted")
else:
raise HTTPException(status_code=500, detail=f'Stat {stat_id} was not deleted')
raise HTTPException(status_code=500, detail=f"Stat {stat_id} was not deleted")
# @app.get('/api/v1/plays/batting')
@@ -449,4 +596,3 @@ async def delete_batstat(stat_id, token: str = Depends(oauth2_scheme)):
# }
# db.close()
# return return_stats


@@ -145,6 +145,7 @@ async def get_card_ratings(
vs_hand: Literal["R", "L", "vR", "vL"] = None,
short_output: bool = False,
csv: bool = False,
limit: int = 100,
):
this_team = Team.get_or_none(Team.id == team_id)
logging.debug(f"Team: {this_team} / has_guide: {this_team.has_guide}")
@@ -178,6 +179,9 @@
)
all_ratings = all_ratings.where(BattingCardRatings.battingcard << set_cards)
total_count = all_ratings.count() if not csv else 0
all_ratings = all_ratings.limit(max(0, min(limit, 500)))
if csv:
# return_val = query_to_csv(all_ratings)
return_vals = [model_to_dict(x) for x in all_ratings]
@@ -192,7 +196,7 @@
else:
return_val = {
"count": all_ratings.count(),
"count": total_count,
"ratings": [
model_to_dict(x, recurse=not short_output) for x in all_ratings
],
@@ -281,7 +285,7 @@ def get_scouting_dfs(cardset_id: list = None):
)
]
),
name=f"Arm OF",
name="Arm OF",
)
)
series_list.append(
@@ -292,7 +296,7 @@
for x in positions.where(CardPosition.position == "C")
]
),
name=f"Arm C",
name="Arm C",
)
)
series_list.append(
@@ -303,7 +307,7 @@
for x in positions.where(CardPosition.position == "C")
]
),
name=f"PB C",
name="PB C",
)
)
series_list.append(
@@ -314,7 +318,7 @@
for x in positions.where(CardPosition.position == "C")
]
),
name=f"Throw C",
name="Throw C",
)
)
logging.debug(f"series_list: {series_list}")
@@ -334,9 +338,9 @@ async def get_card_scouting(team_id: int, ts: str):
"https://ko-fi.com/manticorum/shop"
)
if os.path.isfile(f"storage/batting-ratings.csv"):
if os.path.isfile("storage/batting-ratings.csv"):
return FileResponse(
path=f"storage/batting-ratings.csv",
path="storage/batting-ratings.csv",
media_type="text/csv",
# headers=headers
)
@@ -354,7 +358,7 @@ async def post_calc_scouting(token: str = Depends(oauth2_scheme)):
status_code=401, detail="You are not authorized to calculate card ratings."
)
logging.warning(f"Re-calculating batting ratings\n\n")
logging.warning("Re-calculating batting ratings\n\n")
output = get_scouting_dfs()
first = ["player_id", "player_name", "cardset_name", "rarity", "hand", "variant"]
@@ -370,9 +374,9 @@
@router.get("/basic")
async def get_basic_scouting(cardset_id: list = Query(default=None)):
if os.path.isfile(f"storage/batting-basic.csv"):
if os.path.isfile("storage/batting-basic.csv"):
return FileResponse(
path=f"storage/batting-basic.csv",
path="storage/batting-basic.csv",
media_type="text/csv",
# headers=headers
)
@@ -390,7 +394,7 @@ async def post_calc_basic(token: str = Depends(oauth2_scheme)):
status_code=401, detail="You are not authorized to calculate basic ratings."
)
logging.warning(f"Re-calculating basic batting ratings\n\n")
logging.warning("Re-calculating basic batting ratings\n\n")
raw_data = get_scouting_dfs()
logging.debug(f"output: {raw_data}")
@@ -667,9 +671,11 @@ async def get_player_ratings(
if variant is not None:
all_cards = all_cards.where(BattingCard.variant << variant)
all_ratings = BattingCardRatings.select().where(
BattingCardRatings.battingcard << all_cards
).order_by(BattingCardRatings.id)
all_ratings = (
BattingCardRatings.select()
.where(BattingCardRatings.battingcard << all_cards)
.order_by(BattingCardRatings.id)
)
return_val = {
"count": all_ratings.count(),


@@ -51,6 +51,7 @@ async def get_card_positions(
cardset_id: list = Query(default=None),
short_output: Optional[bool] = False,
sort: Optional[str] = "innings-desc",
limit: int = 100,
):
all_pos = (
CardPosition.select()
@@ -86,6 +87,9 @@ async def get_card_positions(
elif sort == "range-asc":
all_pos = all_pos.order_by(CardPosition.range, CardPosition.id)
limit = max(0, min(limit, 500))
all_pos = all_pos.limit(limit)
return_val = {
"count": all_pos.count(),
"positions": [model_to_dict(x, recurse=not short_output) for x in all_pos],


@@ -4,9 +4,19 @@ import logging
import pydantic
from pandas import DataFrame
from ..db_engine import db, Card, model_to_dict, Team, Player, Pack, Paperdex, CARDSETS, DoesNotExist
from ..db_engine import (
db,
Card,
model_to_dict,
Team,
Player,
Pack,
Paperdex,
CARDSETS,
DoesNotExist,
)
from ..dependencies import oauth2_scheme, valid_token
from ..services.evolution_init import _determine_card_type, initialize_card_evolution
from ..services.refractor_init import _determine_card_type, initialize_card_refractor
router = APIRouter(prefix="/api/v2/cards", tags=["cards"])
@@ -47,19 +57,25 @@ async def get_cards(
try:
this_team = Team.get_by_id(team_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No team found with id {team_id}')
raise HTTPException(
status_code=404, detail=f"No team found with id {team_id}"
)
all_cards = all_cards.where(Card.team == this_team)
if player_id is not None:
try:
this_player = Player.get_by_id(player_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No player found with id {player_id}')
raise HTTPException(
status_code=404, detail=f"No player found with id {player_id}"
)
all_cards = all_cards.where(Card.player == this_player)
if pack_id is not None:
try:
this_pack = Pack.get_by_id(pack_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No pack found with id {pack_id}')
raise HTTPException(
status_code=404, detail=f"No pack found with id {pack_id}"
)
all_cards = all_cards.where(Card.pack == this_pack)
if value is not None:
all_cards = all_cards.where(Card.value == value)
@@ -125,7 +141,6 @@ async def get_cards(
dex_by_player.setdefault(row.player_id, []).append(row)
return_val = {"count": len(card_list), "cards": []}
for x in card_list:
this_record = model_to_dict(x)
logging.debug(f"this_record: {this_record}")
@@ -147,7 +162,7 @@ async def v1_cards_get_one(card_id, csv: Optional[bool] = False):
try:
this_card = Card.get_by_id(card_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No card found with id {card_id}')
raise HTTPException(status_code=404, detail=f"No card found with id {card_id}")
if csv:
data_list = [
@@ -207,15 +222,15 @@ async def v1_cards_post(cards: CardModel, token: str = Depends(oauth2_scheme)):
cost_query.execute()
# sheets.post_new_cards(SHEETS_AUTH, lc_id)
# WP-10: initialize evolution state for each new card (fire-and-forget)
# WP-10: initialize refractor state for each new card (fire-and-forget)
for x in cards.cards:
try:
this_player = Player.get_by_id(x.player_id)
card_type = _determine_card_type(this_player)
initialize_card_evolution(x.player_id, x.team_id, card_type)
initialize_card_refractor(x.player_id, x.team_id, card_type)
except Exception:
logging.exception(
"evolution hook: unexpected error for player_id=%s team_id=%s",
"refractor hook: unexpected error for player_id=%s team_id=%s",
x.player_id,
x.team_id,
)
@@ -319,8 +334,8 @@ async def v1_cards_wipe_team(team_id: int, token: str = Depends(oauth2_scheme)):
try:
this_team = Team.get_by_id(team_id)
except DoesNotExist:
logging.error(f'/cards/wipe-team/{team_id} - could not find team')
raise HTTPException(status_code=404, detail=f'Team {team_id} not found')
logging.error(f"/cards/wipe-team/{team_id} - could not find team")
raise HTTPException(status_code=404, detail=f"Team {team_id} not found")
t_query = Card.update(team=None).where(Card.team == this_team).execute()
return f"Wiped {t_query} cards"
@@ -348,7 +363,7 @@ async def v1_cards_patch(
try:
this_card = Card.get_by_id(card_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No card found with id {card_id}')
raise HTTPException(status_code=404, detail=f"No card found with id {card_id}")
if player_id is not None:
this_card.player_id = player_id
@@ -391,7 +406,7 @@ async def v1_cards_delete(card_id, token: str = Depends(oauth2_scheme)):
try:
this_card = Card.get_by_id(card_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No cards found with id {card_id}')
raise HTTPException(status_code=404, detail=f"No cards found with id {card_id}")
count = this_card.delete_instance()


@@ -8,10 +8,7 @@ from ..db_engine import Event, model_to_dict, fn, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/events',
tags=['events']
)
router = APIRouter(prefix="/api/v2/events", tags=["events"])
class EventModel(pydantic.BaseModel):
@@ -23,76 +20,102 @@ class EventModel(pydantic.BaseModel):
active: Optional[bool] = False
@router.get('')
@router.get("")
async def v1_events_get(
name: Optional[str] = None, in_desc: Optional[str] = None, active: Optional[bool] = None,
csv: Optional[bool] = None):
name: Optional[str] = None,
in_desc: Optional[str] = None,
active: Optional[bool] = None,
csv: Optional[bool] = None,
limit: Optional[int] = 100,
):
all_events = Event.select().order_by(Event.id)
if name is not None:
all_events = all_events.where(fn.Lower(Event.name) == name.lower())
if in_desc is not None:
all_events = all_events.where(
(fn.Lower(Event.short_desc).contains(in_desc.lower())) |
(fn.Lower(Event.long_desc).contains(in_desc.lower()))
(fn.Lower(Event.short_desc).contains(in_desc.lower()))
| (fn.Lower(Event.long_desc).contains(in_desc.lower()))
)
if active is not None:
all_events = all_events.where(Event.active == active)
total_count = all_events.count() if not csv else 0
all_events = all_events.limit(max(0, min(limit, 500)))
if csv:
data_list = [['id', 'name', 'short_desc', 'long_desc', 'url', 'thumbnail', 'active']]
data_list = [
["id", "name", "short_desc", "long_desc", "url", "thumbnail", "active"]
]
for line in all_events:
data_list.append(
[
line.id, line.name, line.short_desc, line.long_desc, line.url, line.thumbnail, line.active
line.id,
line.name,
line.short_desc,
line.long_desc,
line.url,
line.thumbnail,
line.active,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_events.count(), 'events': []}
return_val = {"count": total_count, "events": []}
for x in all_events:
return_val['events'].append(model_to_dict(x))
return_val["events"].append(model_to_dict(x))
return return_val
@router.get('/{event_id}')
@router.get("/{event_id}")
async def v1_events_get_one(event_id, csv: Optional[bool] = False):
try:
this_event = Event.get_by_id(event_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No event found with id {event_id}')
raise HTTPException(
status_code=404, detail=f"No event found with id {event_id}"
)
if csv:
data_list = [
['id', 'name', 'short_desc', 'long_desc', 'url', 'thumbnail', 'active'],
[this_event.id, this_event.name, this_event.short_desc, this_event.long_desc, this_event.url,
this_event.thumbnail, this_event.active]
["id", "name", "short_desc", "long_desc", "url", "thumbnail", "active"],
[
this_event.id,
this_event.name,
this_event.short_desc,
this_event.long_desc,
this_event.url,
this_event.thumbnail,
this_event.active,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_event)
return return_val
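The CSV branches in these routers all share one shape: the first element of `data_list` is the header row, and `DataFrame(data_list).to_csv(header=False, index=False)` serializes header and data rows alike. For illustration, an equivalent using only the stdlib `csv` module (the sample rows here are made up, not real event data):

```python
import csv
import io

# First row is the header; subsequent rows are data — the pandas call in the
# diff treats them identically because header=False suppresses pandas' own header.
data_list = [
    ["id", "name", "active"],
    [1, "Opening Day", True],
]
buf = io.StringIO()
csv.writer(buf, lineterminator="\n").writerows(data_list)
csv_text = buf.getvalue()
# "id,name,active\n1,Opening Day,True\n"
```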
@router.post('')
@router.post("")
async def v1_events_post(event: EventModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post events. This event has been logged.'
detail="You are not authorized to post events. This event has been logged.",
)
dupe_event = Event.get_or_none(Event.name == event.name)
if dupe_event:
raise HTTPException(status_code=400, detail=f'There is already an event using {event.name}')
raise HTTPException(
status_code=400, detail=f"There is already an event using {event.name}"
)
this_event = Event(
name=event.name,
@@ -100,7 +123,7 @@ async def v1_events_post(event: EventModel, token: str = Depends(oauth2_scheme))
long_desc=event.long_desc,
url=event.url,
thumbnail=event.thumbnail,
active=event.active
active=event.active,
)
saved = this_event.save()
@@ -110,25 +133,33 @@ async def v1_events_post(event: EventModel, token: str = Depends(oauth2_scheme))
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that cardset'
detail="Well slap my ass and call me a teapot; I could not save that cardset",
)
@router.patch('/{event_id}')
@router.patch("/{event_id}")
async def v1_events_patch(
event_id, name: Optional[str] = None, short_desc: Optional[str] = None, long_desc: Optional[str] = None,
url: Optional[str] = None, thumbnail: Optional[str] = None, active: Optional[bool] = None,
token: str = Depends(oauth2_scheme)):
event_id,
name: Optional[str] = None,
short_desc: Optional[str] = None,
long_desc: Optional[str] = None,
url: Optional[str] = None,
thumbnail: Optional[str] = None,
active: Optional[bool] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch events. This event has been logged.'
detail="You are not authorized to patch events. This event has been logged.",
)
try:
this_event = Event.get_by_id(event_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No event found with id {event_id}')
raise HTTPException(
status_code=404, detail=f"No event found with id {event_id}"
)
if name is not None:
this_event.name = name
@@ -149,26 +180,30 @@ async def v1_events_patch(
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that event'
detail="Well slap my ass and call me a teapot; I could not save that event",
)
@router.delete('/{event_id}')
@router.delete("/{event_id}")
async def v1_events_delete(event_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete events. This event has been logged.'
detail="You are not authorized to delete events. This event has been logged.",
)
try:
this_event = Event.get_by_id(event_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No event found with id {event_id}')
raise HTTPException(
status_code=404, detail=f"No event found with id {event_id}"
)
count = this_event.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Event {event_id} has been deleted')
raise HTTPException(
status_code=200, detail=f"Event {event_id} has been deleted"
)
else:
raise HTTPException(status_code=500, detail=f'Event {event_id} was not deleted')
raise HTTPException(status_code=500, detail=f"Event {event_id} was not deleted")


@@ -1,231 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException, Query
import logging
from typing import Optional
from ..db_engine import model_to_dict
from ..dependencies import oauth2_scheme, valid_token
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/v2/evolution", tags=["evolution"])
# Tier -> threshold attribute name. Index = current_tier; value is the
# attribute on EvolutionTrack whose value is the *next* threshold to reach.
# Tier 4 is fully evolved so there is no next threshold (None sentinel).
_NEXT_THRESHOLD_ATTR = {
0: "t1_threshold",
1: "t2_threshold",
2: "t3_threshold",
3: "t4_threshold",
4: None,
}
def _build_card_state_response(state) -> dict:
"""Serialise an EvolutionCardState into the standard API response shape.
Produces a flat dict with player_id and team_id as plain integers,
a nested 'track' dict with all threshold fields, and a computed
'next_threshold' field:
- For tiers 0-3: the threshold value for the tier immediately above.
- For tier 4 (fully evolved): None.
Uses model_to_dict(recurse=False) internally so FK fields are returned
as IDs rather than nested objects, then promotes the needed IDs up to
the top level.
"""
track = state.track
track_dict = model_to_dict(track, recurse=False)
next_attr = _NEXT_THRESHOLD_ATTR.get(state.current_tier)
next_threshold = getattr(track, next_attr) if next_attr else None
return {
"player_id": state.player_id,
"team_id": state.team_id,
"current_tier": state.current_tier,
"current_value": state.current_value,
"fully_evolved": state.fully_evolved,
"last_evaluated_at": (
state.last_evaluated_at.isoformat() if state.last_evaluated_at else None
),
"track": track_dict,
"next_threshold": next_threshold,
}
@router.get("/tracks")
async def list_tracks(
card_type: Optional[str] = Query(default=None),
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import EvolutionTrack
query = EvolutionTrack.select()
if card_type is not None:
query = query.where(EvolutionTrack.card_type == card_type)
items = [model_to_dict(t, recurse=False) for t in query]
return {"count": len(items), "items": items}
@router.get("/tracks/{track_id}")
async def get_track(track_id: int, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import EvolutionTrack
try:
track = EvolutionTrack.get_by_id(track_id)
except Exception:
raise HTTPException(status_code=404, detail=f"Track {track_id} not found")
return model_to_dict(track, recurse=False)
@router.get("/cards/{card_id}")
async def get_card_state(card_id: int, token: str = Depends(oauth2_scheme)):
"""Return the EvolutionCardState for a card identified by its Card.id.
Resolves card_id -> (player_id, team_id) via the Card table, then looks
up the matching EvolutionCardState row. Because duplicate cards for the
same player+team share one state row (unique-(player,team) constraint),
any card_id belonging to that player on that team returns the same state.
Returns 404 when:
- The card_id does not exist in the Card table.
- The card exists but has no corresponding EvolutionCardState yet.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import Card, EvolutionCardState, EvolutionTrack, DoesNotExist
# Resolve card_id to player+team
try:
card = Card.get_by_id(card_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f"Card {card_id} not found")
# Look up the evolution state for this (player, team) pair, joining the
# track so a single query resolves both rows.
try:
state = (
EvolutionCardState.select(EvolutionCardState, EvolutionTrack)
.join(EvolutionTrack)
.where(
(EvolutionCardState.player == card.player_id)
& (EvolutionCardState.team == card.team_id)
)
.get()
)
except DoesNotExist:
raise HTTPException(
status_code=404,
detail=f"No evolution state for card {card_id}",
)
return _build_card_state_response(state)
@router.post("/cards/{card_id}/evaluate")
async def evaluate_card(card_id: int, token: str = Depends(oauth2_scheme)):
"""Force-recalculate evolution state for a card from career stats.
Resolves card_id to (player_id, team_id), then recomputes the evolution
tier from all player_season_stats rows for that pair. Idempotent.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import Card
from ..services.evolution_evaluator import evaluate_card as _evaluate
try:
card = Card.get_by_id(card_id)
except Exception:
raise HTTPException(status_code=404, detail=f"Card {card_id} not found")
try:
result = _evaluate(card.player_id, card.team_id)
except ValueError as exc:
raise HTTPException(status_code=404, detail=str(exc))
return result
@router.post("/evaluate-game/{game_id}")
async def evaluate_game(game_id: int, token: str = Depends(oauth2_scheme)):
"""Evaluate evolution state for all players who appeared in a game.
Finds all unique (player_id, team_id) pairs from the game's StratPlay rows,
then for each pair that has an EvolutionCardState, re-computes the evolution
tier. Pairs without a state row are silently skipped. Per-player errors are
logged but do not abort the batch.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import EvolutionCardState, EvolutionTrack, Player, StratPlay
from ..services.evolution_evaluator import evaluate_card
plays = list(StratPlay.select().where(StratPlay.game == game_id))
pairs: set[tuple[int, int]] = set()
for play in plays:
if play.batter_id is not None:
pairs.add((play.batter_id, play.batter_team_id))
if play.pitcher_id is not None:
pairs.add((play.pitcher_id, play.pitcher_team_id))
evaluated = 0
tier_ups = []
for player_id, team_id in pairs:
try:
state = EvolutionCardState.get_or_none(
(EvolutionCardState.player_id == player_id)
& (EvolutionCardState.team_id == team_id)
)
if state is None:
continue
old_tier = state.current_tier
result = evaluate_card(player_id, team_id)
evaluated += 1
new_tier = result.get("current_tier", old_tier)
if new_tier > old_tier:
player_name = "Unknown"
try:
p = Player.get_by_id(player_id)
player_name = p.p_name
except Exception:
pass
tier_ups.append(
{
"player_id": player_id,
"team_id": team_id,
"player_name": player_name,
"old_tier": old_tier,
"new_tier": new_tier,
"current_value": result.get("current_value", 0),
"track_name": state.track.name if state.track else "Unknown",
}
)
except Exception as exc:
logger.warning(
f"Evolution eval failed for player={player_id} team={team_id}: {exc}"
)
return {"evaluated": evaluated, "tier_ups": tier_ups}


@@ -8,10 +8,7 @@ from ..db_engine import GameRewards, model_to_dict, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/gamerewards',
tags=['gamerewards']
)
router = APIRouter(prefix="/api/v2/gamerewards", tags=["gamerewards"])
class GameRewardModel(pydantic.BaseModel):
@@ -21,10 +18,15 @@ class GameRewardModel(pydantic.BaseModel):
money: Optional[int] = None
@router.get('')
@router.get("")
async def v1_gamerewards_get(
name: Optional[str] = None, pack_type_id: Optional[int] = None, player_id: Optional[int] = None,
money: Optional[int] = None, csv: Optional[bool] = None):
name: Optional[str] = None,
pack_type_id: Optional[int] = None,
player_id: Optional[int] = None,
money: Optional[int] = None,
csv: Optional[bool] = None,
limit: int = 100,
):
all_rewards = GameRewards.select().order_by(GameRewards.id)
# if all_rewards.count() == 0:
@@ -40,61 +42,77 @@ async def v1_gamerewards_get(
if money is not None:
all_rewards = all_rewards.where(GameRewards.money == money)
limit = max(0, min(limit, 500))
total_count = all_rewards.count() if not csv else 0
all_rewards = all_rewards.limit(limit)
if csv:
data_list = [['id', 'pack_type_id', 'player_id', 'money']]
data_list = [["id", "pack_type_id", "player_id", "money"]]
for line in all_rewards:
data_list.append([
line.id, line.pack_type_id if line.pack_type else None, line.player_id if line.player else None,
line.money
])
data_list.append(
[
line.id,
line.pack_type_id if line.pack_type else None,
line.player_id if line.player else None,
line.money,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_rewards.count(), 'gamerewards': []}
return_val = {"count": total_count, "gamerewards": []}
for x in all_rewards:
return_val['gamerewards'].append(model_to_dict(x))
return_val["gamerewards"].append(model_to_dict(x))
return return_val
@router.get('/{gameaward_id}')
@router.get("/{gameaward_id}")
async def v1_gamerewards_get_one(gamereward_id, csv: Optional[bool] = None):
try:
this_game_reward = GameRewards.get_by_id(gamereward_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No game reward found with id {gamereward_id}')
raise HTTPException(
status_code=404, detail=f"No game reward found with id {gamereward_id}"
)
if csv:
data_list = [
['id', 'pack_type_id', 'player_id', 'money'],
[this_game_reward.id, this_game_reward.pack_type_id if this_game_reward.pack_type else None,
this_game_reward.player_id if this_game_reward.player else None, this_game_reward.money]
["id", "pack_type_id", "player_id", "money"],
[
this_game_reward.id,
this_game_reward.pack_type_id if this_game_reward.pack_type else None,
this_game_reward.player_id if this_game_reward.player else None,
this_game_reward.money,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_game_reward)
return return_val
@router.post('')
async def v1_gamerewards_post(game_reward: GameRewardModel, token: str = Depends(oauth2_scheme)):
@router.post("")
async def v1_gamerewards_post(
game_reward: GameRewardModel, token: str = Depends(oauth2_scheme)
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post game rewards. This event has been logged.'
detail="You are not authorized to post game rewards. This event has been logged.",
)
this_award = GameRewards(
name=game_reward.name,
pack_type_id=game_reward.pack_type_id,
player_id=game_reward.player_id,
money=game_reward.money
money=game_reward.money,
)
saved = this_award.save()
@@ -104,24 +122,31 @@ async def v1_gamerewards_post(game_reward: GameRewardModel, token: str = Depends
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that roster'
detail="Well slap my ass and call me a teapot; I could not save that roster",
)
@router.patch('/{game_reward_id}')
@router.patch("/{game_reward_id}")
async def v1_gamerewards_patch(
game_reward_id: int, name: Optional[str] = None, pack_type_id: Optional[int] = None,
player_id: Optional[int] = None, money: Optional[int] = None, token: str = Depends(oauth2_scheme)):
game_reward_id: int,
name: Optional[str] = None,
pack_type_id: Optional[int] = None,
player_id: Optional[int] = None,
money: Optional[int] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch gamerewards. This event has been logged.'
detail="You are not authorized to patch gamerewards. This event has been logged.",
)
try:
this_game_reward = GameRewards.get_by_id(game_reward_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No game reward found with id {game_reward_id}')
raise HTTPException(
status_code=404, detail=f"No game reward found with id {game_reward_id}"
)
if name is not None:
this_game_reward.name = name
@@ -147,27 +172,32 @@ async def v1_gamerewards_patch(
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that rarity'
detail="Well slap my ass and call me a teapot; I could not save that rarity",
)
@router.delete('/{gamereward_id}')
@router.delete("/{gamereward_id}")
async def v1_gamerewards_delete(gamereward_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete awards. This event has been logged.'
detail="You are not authorized to delete awards. This event has been logged.",
)
try:
this_award = GameRewards.get_by_id(gamereward_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No award found with id {gamereward_id}')
raise HTTPException(
status_code=404, detail=f"No award found with id {gamereward_id}"
)
count = this_award.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Game Reward {gamereward_id} has been deleted')
raise HTTPException(
status_code=200, detail=f"Game Reward {gamereward_id} has been deleted"
)
else:
raise HTTPException(status_code=500, detail=f'Game Reward {gamereward_id} was not deleted')
raise HTTPException(
status_code=500, detail=f"Game Reward {gamereward_id} was not deleted"
)


@@ -30,6 +30,7 @@ async def v1_gauntletreward_get(
reward_id: list = Query(default=None),
win_num: Optional[int] = None,
loss_max: Optional[int] = None,
limit: int = 100,
):
all_rewards = GauntletReward.select().order_by(GauntletReward.id)
@@ -46,7 +47,11 @@ async def v1_gauntletreward_get(
all_rewards = all_rewards.order_by(-GauntletReward.loss_max, GauntletReward.win_num)
return_val = {"count": all_rewards.count(), "rewards": []}
limit = max(0, min(limit, 500))
total_count = all_rewards.count()
all_rewards = all_rewards.limit(limit)
return_val = {"count": total_count, "rewards": []}
for x in all_rewards:
return_val["rewards"].append(model_to_dict(x))


@@ -8,10 +8,7 @@ from ..db_engine import GauntletRun, model_to_dict, DatabaseError, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/gauntletruns',
tags=['notifs']
)
router = APIRouter(prefix="/api/v2/gauntletruns", tags=["notifs"])
class GauntletRunModel(pydantic.BaseModel):
@@ -24,13 +21,25 @@ class GauntletRunModel(pydantic.BaseModel):
ended: Optional[int] = None
@router.get('')
@router.get("")
async def get_gauntletruns(
team_id: list = Query(default=None), wins: Optional[int] = None, wins_min: Optional[int] = None,
wins_max: Optional[int] = None, losses: Optional[int] = None, losses_min: Optional[int] = None,
losses_max: Optional[int] = None, gsheet: Optional[str] = None, created_after: Optional[int] = None,
created_before: Optional[int] = None, ended_after: Optional[int] = None, ended_before: Optional[int] = None,
is_active: Optional[bool] = None, gauntlet_id: list = Query(default=None), season: list = Query(default=None)):
team_id: list = Query(default=None),
wins: Optional[int] = None,
wins_min: Optional[int] = None,
wins_max: Optional[int] = None,
losses: Optional[int] = None,
losses_min: Optional[int] = None,
losses_max: Optional[int] = None,
gsheet: Optional[str] = None,
created_after: Optional[int] = None,
created_before: Optional[int] = None,
ended_after: Optional[int] = None,
ended_before: Optional[int] = None,
is_active: Optional[bool] = None,
gauntlet_id: list = Query(default=None),
season: list = Query(default=None),
limit: int = 100,
):
all_gauntlets = GauntletRun.select().order_by(GauntletRun.id)
if team_id is not None:
@@ -73,39 +82,48 @@ async def get_gauntletruns(
if season is not None:
all_gauntlets = all_gauntlets.where(GauntletRun.team.season << season)
return_val = {'count': all_gauntlets.count(), 'runs': []}
for x in all_gauntlets:
return_val['runs'].append(model_to_dict(x))
limit = max(0, min(limit, 500))
return_val = {"count": all_gauntlets.count(), "runs": []}
for x in all_gauntlets.limit(limit):
return_val["runs"].append(model_to_dict(x))
return return_val
@router.get('/{gauntletrun_id}')
@router.get("/{gauntletrun_id}")
async def get_one_gauntletrun(gauntletrun_id):
try:
this_gauntlet = GauntletRun.get_by_id(gauntletrun_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No gauntlet found with id {gauntletrun_id}')
raise HTTPException(
status_code=404, detail=f"No gauntlet found with id {gauntletrun_id}"
)
return_val = model_to_dict(this_gauntlet)
return return_val
@router.patch('/{gauntletrun_id}')
@router.patch("/{gauntletrun_id}")
async def patch_gauntletrun(
gauntletrun_id, team_id: Optional[int] = None, wins: Optional[int] = None, losses: Optional[int] = None,
gsheet: Optional[str] = None, created: Optional[bool] = None, ended: Optional[bool] = None,
token: str = Depends(oauth2_scheme)):
gauntletrun_id,
team_id: Optional[int] = None,
wins: Optional[int] = None,
losses: Optional[int] = None,
gsheet: Optional[str] = None,
created: Optional[bool] = None,
ended: Optional[bool] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch gauntlet runs. This event has been logged.'
detail="You are not authorized to patch gauntlet runs. This event has been logged.",
)
this_run = GauntletRun.get_or_none(GauntletRun.id == gauntletrun_id)
if this_run is None:
raise KeyError(f'Gauntlet Run ID {gauntletrun_id} not found')
raise KeyError(f"Gauntlet Run ID {gauntletrun_id} not found")
if team_id is not None:
this_run.team_id = team_id
@@ -130,41 +148,42 @@ async def patch_gauntletrun(
r_curr = model_to_dict(this_run)
return r_curr
else:
raise DatabaseError(f'Unable to patch gauntlet run {gauntletrun_id}')
raise DatabaseError(f"Unable to patch gauntlet run {gauntletrun_id}")
@router.post('')
async def post_gauntletrun(gauntletrun: GauntletRunModel, token: str = Depends(oauth2_scheme)):
@router.post("")
async def post_gauntletrun(
gauntletrun: GauntletRunModel, token: str = Depends(oauth2_scheme)
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post gauntlets. This event has been logged.'
detail="You are not authorized to post gauntlets. This event has been logged.",
)
run_data = gauntletrun.dict()
# Convert milliseconds timestamps to datetime for PostgreSQL
if run_data.get('created'):
run_data['created'] = datetime.fromtimestamp(run_data['created'] / 1000)
if run_data.get("created"):
run_data["created"] = datetime.fromtimestamp(run_data["created"] / 1000)
else:
run_data['created'] = datetime.now()
if run_data.get('ended'):
run_data['ended'] = datetime.fromtimestamp(run_data['ended'] / 1000)
run_data["created"] = datetime.now()
if run_data.get("ended"):
run_data["ended"] = datetime.fromtimestamp(run_data["ended"] / 1000)
else:
run_data['ended'] = None
run_data["ended"] = None
this_run = GauntletRun(**run_data)
if this_run.save():
r_run = model_to_dict(this_run)
return r_run
else:
raise DatabaseError(f'Unable to post gauntlet run')
raise DatabaseError("Unable to post gauntlet run")
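Several POST handlers in this diff convert client-supplied epoch-millisecond timestamps to `datetime` before saving. A minimal sketch of that conversion (the helper name is mine, not from the codebase):

```python
from datetime import datetime

def ms_to_datetime(ms):
    # Clients send epoch milliseconds; the PostgreSQL columns want datetime.
    # Falsy input (None or 0) falls back to "now", mirroring the handlers.
    return datetime.fromtimestamp(ms / 1000) if ms else datetime.now()
```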
@router.delete('/{gauntletrun_id}')
@router.delete("/{gauntletrun_id}")
async def delete_gauntletrun(gauntletrun_id):
if GauntletRun.delete_by_id(gauntletrun_id) == 1:
return f'Deleted gauntlet run ID {gauntletrun_id}'
raise DatabaseError(f'Unable to delete gauntlet run {gauntletrun_id}')
return f"Deleted gauntlet run ID {gauntletrun_id}"
raise DatabaseError(f"Unable to delete gauntlet run {gauntletrun_id}")


@@ -73,6 +73,7 @@ async def get_players(
key_mlbam: list = Query(default=None),
offense_col: list = Query(default=None),
csv: Optional[bool] = False,
limit: int = 100,
):
all_players = MlbPlayer.select().order_by(MlbPlayer.id)
@@ -101,12 +102,15 @@ async def get_players(
if offense_col is not None:
all_players = all_players.where(MlbPlayer.offense_col << offense_col)
total_count = all_players.count() if not csv else 0
all_players = all_players.limit(max(0, min(limit, 500)))
if csv:
return_val = query_to_csv(all_players)
return Response(content=return_val, media_type="text/csv")
return_val = {
"count": all_players.count(),
"count": total_count,
"players": [model_to_dict(x) for x in all_players],
}
return return_val
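The new `limit` parameter here and in the other list endpoints is clamped with the same expression, `max(0, min(limit, 500))`. As a standalone sketch (function name is mine):

```python
def clamp_limit(limit, lo=0, hi=500):
    # Clamp a caller-supplied page size into [lo, hi]; negative values
    # collapse to lo and oversized requests are capped at hi.
    return max(lo, min(limit, hi))
```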
@@ -222,7 +226,7 @@ async def post_one_player(player: PlayerModel, token: str = Depends(oauth2_schem
| (MlbPlayer.key_bbref == player.key_bbref)
)
if dupes.count() > 0:
logging.info(f"POST /mlbplayers/one - dupes found:")
logging.info("POST /mlbplayers/one - dupes found:")
for x in dupes:
logging.info(f"{x}")
raise HTTPException(


@@ -9,10 +9,7 @@ from ..db_engine import Notification, model_to_dict, fn, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/notifs',
tags=['notifs']
)
router = APIRouter(prefix="/api/v2/notifs", tags=["notifs"])
class NotifModel(pydantic.BaseModel):
@@ -21,19 +18,30 @@ class NotifModel(pydantic.BaseModel):
desc: Optional[str] = None
field_name: str
message: str
about: Optional[str] = 'blank'
about: Optional[str] = "blank"
ack: Optional[bool] = False
@router.get('')
@router.get("")
async def get_notifs(
created_after: Optional[int] = None, title: Optional[str] = None, desc: Optional[str] = None,
field_name: Optional[str] = None, in_desc: Optional[str] = None, about: Optional[str] = None,
ack: Optional[bool] = None, csv: Optional[bool] = None):
created_after: Optional[int] = None,
title: Optional[str] = None,
desc: Optional[str] = None,
field_name: Optional[str] = None,
in_desc: Optional[str] = None,
about: Optional[str] = None,
ack: Optional[bool] = None,
csv: Optional[bool] = None,
limit: Optional[int] = 100,
):
if limit is not None:
limit = max(0, min(limit, 500))
all_notif = Notification.select().order_by(Notification.id)
if all_notif.count() == 0:
raise HTTPException(status_code=404, detail=f'There are no notifications to filter')
raise HTTPException(
status_code=404, detail="There are no notifications to filter"
)
if created_after is not None:
# Convert milliseconds timestamp to datetime for PostgreSQL comparison
@@ -46,62 +54,90 @@ async def get_notifs(
if field_name is not None:
all_notif = all_notif.where(Notification.field_name == field_name)
if in_desc is not None:
all_notif = all_notif.where(fn.Lower(Notification.desc).contains(in_desc.lower()))
all_notif = all_notif.where(
fn.Lower(Notification.desc).contains(in_desc.lower())
)
if about is not None:
all_notif = all_notif.where(Notification.about == about)
if ack is not None:
all_notif = all_notif.where(Notification.ack == ack)
total_count = all_notif.count()
if limit is not None:
all_notif = all_notif.limit(limit)
if csv:
data_list = [['id', 'created', 'title', 'desc', 'field_name', 'message', 'about', 'ack']]
data_list = [
["id", "created", "title", "desc", "field_name", "message", "about", "ack"]
]
for line in all_notif:
data_list.append([
line.id, line.created, line.title, line.desc, line.field_name, line.message, line.about, line.ack
])
data_list.append(
[
line.id,
line.created,
line.title,
line.desc,
line.field_name,
line.message,
line.about,
line.ack,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_notif.count(), 'notifs': []}
return_val = {"count": total_count, "notifs": []}
for x in all_notif:
return_val['notifs'].append(model_to_dict(x))
return_val["notifs"].append(model_to_dict(x))
return return_val
@router.get('/{notif_id}')
@router.get("/{notif_id}")
async def get_one_notif(notif_id, csv: Optional[bool] = None):
try:
this_notif = Notification.get_by_id(notif_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No notification found with id {notif_id}')
raise HTTPException(
status_code=404, detail=f"No notification found with id {notif_id}"
)
if csv:
data_list = [
['id', 'created', 'title', 'desc', 'field_name', 'message', 'about', 'ack'],
[this_notif.id, this_notif.created, this_notif.title, this_notif.desc, this_notif.field_name,
this_notif.message, this_notif.about, this_notif.ack]
["id", "created", "title", "desc", "field_name", "message", "about", "ack"],
[
this_notif.id,
this_notif.created,
this_notif.title,
this_notif.desc,
this_notif.field_name,
this_notif.message,
this_notif.about,
this_notif.ack,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_notif)
return return_val
@router.post('')
@router.post("")
async def post_notif(notif: NotifModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post notifications. This event has been logged.'
detail="You are not authorized to post notifications. This event has been logged.",
)
logging.info(f'new notif: {notif}')
logging.info(f"new notif: {notif}")
this_notif = Notification(
created=datetime.fromtimestamp(notif.created / 1000),
title=notif.title,
@@ -118,25 +154,34 @@ async def post_notif(notif: NotifModel, token: str = Depends(oauth2_scheme)):
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that notification'
detail="Well slap my ass and call me a teapot; I could not save that notification",
)
@router.patch('/{notif_id}')
@router.patch("/{notif_id}")
async def patch_notif(
notif_id, created: Optional[int] = None, title: Optional[str] = None, desc: Optional[str] = None,
field_name: Optional[str] = None, message: Optional[str] = None, about: Optional[str] = None,
ack: Optional[bool] = None, token: str = Depends(oauth2_scheme)):
notif_id,
created: Optional[int] = None,
title: Optional[str] = None,
desc: Optional[str] = None,
field_name: Optional[str] = None,
message: Optional[str] = None,
about: Optional[str] = None,
ack: Optional[bool] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch notifications. This event has been logged.'
detail="You are not authorized to patch notifications. This event has been logged.",
)
try:
this_notif = Notification.get_by_id(notif_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No notification found with id {notif_id}')
raise HTTPException(
status_code=404, detail=f"No notification found with id {notif_id}"
)
if title is not None:
this_notif.title = title
@@ -159,26 +204,32 @@ async def patch_notif(
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that rarity'
detail="Well slap my ass and call me a teapot; I could not save that rarity",
)
@router.delete('/{notif_id}')
@router.delete("/{notif_id}")
async def delete_notif(notif_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete notifications. This event has been logged.'
detail="You are not authorized to delete notifications. This event has been logged.",
)
try:
this_notif = Notification.get_by_id(notif_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No notification found with id {notif_id}')
raise HTTPException(
status_code=404, detail=f"No notification found with id {notif_id}"
)
count = this_notif.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Notification {notif_id} has been deleted')
raise HTTPException(
status_code=200, detail=f"Notification {notif_id} has been deleted"
)
else:
raise HTTPException(status_code=500, detail=f'Notification {notif_id} was not deleted')
raise HTTPException(
status_code=500, detail=f"Notification {notif_id} was not deleted"
)


@@ -9,34 +9,36 @@ from ..db_engine import Paperdex, model_to_dict, Player, Cardset, Team, DoesNotE
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/paperdex',
tags=['paperdex']
)
router = APIRouter(prefix="/api/v2/paperdex", tags=["paperdex"])
class PaperdexModel(pydantic.BaseModel):
team_id: int
player_id: int
created: Optional[int] = int(datetime.timestamp(datetime.now())*1000)
created: Optional[int] = int(datetime.timestamp(datetime.now()) * 1000)
@router.get('')
@router.get("")
async def get_paperdex(
team_id: Optional[int] = None, player_id: Optional[int] = None, created_after: Optional[int] = None,
cardset_id: Optional[int] = None, created_before: Optional[int] = None, flat: Optional[bool] = False,
csv: Optional[bool] = None):
team_id: Optional[int] = None,
player_id: Optional[int] = None,
created_after: Optional[int] = None,
cardset_id: Optional[int] = None,
created_before: Optional[int] = None,
flat: Optional[bool] = False,
csv: Optional[bool] = None,
limit: int = 100,
):
all_dex = Paperdex.select().join(Player).join(Cardset).order_by(Paperdex.id)
if all_dex.count() == 0:
raise HTTPException(status_code=404, detail=f'There are no paperdex to filter')
raise HTTPException(status_code=404, detail="There are no paperdex to filter")
if team_id is not None:
all_dex = all_dex.where(Paperdex.team_id == team_id)
if player_id is not None:
all_dex = all_dex.where(Paperdex.player_id == player_id)
if cardset_id is not None:
all_sets = Cardset.select().where(Cardset.id == cardset_id)
all_dex = all_dex.where(Paperdex.player.cardset.id == cardset_id)
if created_after is not None:
# Convert milliseconds timestamp to datetime for PostgreSQL comparison
@@ -51,57 +53,62 @@ async def get_paperdex(
# db.close()
# raise HTTPException(status_code=404, detail=f'No paperdex found')
limit = max(0, min(limit, 500))
all_dex = all_dex.limit(limit)
if csv:
data_list = [['id', 'team_id', 'player_id', 'created']]
data_list = [["id", "team_id", "player_id", "created"]]
for line in all_dex:
data_list.append(
[
line.id, line.team.id, line.player.player_id, line.created
]
[line.id, line.team.id, line.player.player_id, line.created]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_dex.count(), 'paperdex': []}
return_val = {"count": all_dex.count(), "paperdex": []}
for x in all_dex:
return_val['paperdex'].append(model_to_dict(x, recurse=not flat))
return_val["paperdex"].append(model_to_dict(x, recurse=not flat))
return return_val
@router.get('/{paperdex_id}')
@router.get("/{paperdex_id}")
async def get_one_paperdex(paperdex_id, csv: Optional[bool] = False):
try:
this_dex = Paperdex.get_by_id(paperdex_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No paperdex found with id {paperdex_id}')
raise HTTPException(
status_code=404, detail=f"No paperdex found with id {paperdex_id}"
)
if csv:
data_list = [
['id', 'team_id', 'player_id', 'created'],
[this_dex.id, this_dex.team.id, this_dex.player.id, this_dex.created]
["id", "team_id", "player_id", "created"],
[this_dex.id, this_dex.team.id, this_dex.player.id, this_dex.created],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_dex)
return return_val
@router.post('')
@router.post("")
async def post_paperdex(paperdex: PaperdexModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post paperdex. This event has been logged.'
detail="You are not authorized to post paperdex. This event has been logged.",
)
dupe_dex = Paperdex.get_or_none(Paperdex.team_id == paperdex.team_id, Paperdex.player_id == paperdex.player_id)
dupe_dex = Paperdex.get_or_none(
Paperdex.team_id == paperdex.team_id, Paperdex.player_id == paperdex.player_id
)
if dupe_dex:
return_val = model_to_dict(dupe_dex)
return return_val
@@ -109,7 +116,7 @@ async def post_paperdex(paperdex: PaperdexModel, token: str = Depends(oauth2_sch
this_dex = Paperdex(
team_id=paperdex.team_id,
player_id=paperdex.player_id,
created=datetime.fromtimestamp(paperdex.created / 1000)
created=datetime.fromtimestamp(paperdex.created / 1000),
)
saved = this_dex.save()
@@ -119,24 +126,30 @@ async def post_paperdex(paperdex: PaperdexModel, token: str = Depends(oauth2_sch
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that dex'
detail="Well slap my ass and call me a teapot; I could not save that dex",
)
@router.patch('/{paperdex_id}')
@router.patch("/{paperdex_id}")
async def patch_paperdex(
paperdex_id, team_id: Optional[int] = None, player_id: Optional[int] = None, created: Optional[int] = None,
token: str = Depends(oauth2_scheme)):
paperdex_id,
team_id: Optional[int] = None,
player_id: Optional[int] = None,
created: Optional[int] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch paperdex. This event has been logged.'
detail="You are not authorized to patch paperdex. This event has been logged.",
)
try:
this_dex = Paperdex.get_by_id(paperdex_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No paperdex found with id {paperdex_id}')
raise HTTPException(
status_code=404, detail=f"No paperdex found with id {paperdex_id}"
)
if team_id is not None:
this_dex.team_id = team_id
@@ -151,40 +164,43 @@ async def patch_paperdex(
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that rarity'
detail="Well slap my ass and call me a teapot; I could not save that rarity",
)
@router.delete('/{paperdex_id}')
@router.delete("/{paperdex_id}")
async def delete_paperdex(paperdex_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete rewards. This event has been logged.'
detail="You are not authorized to delete rewards. This event has been logged.",
)
try:
this_dex = Paperdex.get_by_id(paperdex_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No paperdex found with id {paperdex_id}')
raise HTTPException(
status_code=404, detail=f"No paperdex found with id {paperdex_id}"
)
count = this_dex.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Paperdex {this_dex} has been deleted')
else:
raise HTTPException(status_code=500, detail=f'Paperdex {this_dex} was not deleted')
@router.post('/wipe-ai')
async def wipe_ai_paperdex(token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
raise HTTPException(
status_code=401,
detail='Unauthorized'
status_code=200, detail=f"Paperdex {this_dex} has been deleted"
)
else:
raise HTTPException(
status_code=500, detail=f"Paperdex {this_dex} was not deleted"
)
g_teams = Team.select().where(Team.abbrev.contains('Gauntlet'))
@router.post("/wipe-ai")
async def wipe_ai_paperdex(token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
g_teams = Team.select().where(Team.abbrev.contains("Gauntlet"))
count = Paperdex.delete().where(Paperdex.team << g_teams).execute()
return f'Deleted {count} records'
return f"Deleted {count} records"


@@ -143,6 +143,7 @@ async def get_card_ratings(
short_output: bool = False,
csv: bool = False,
cardset_id: list = Query(default=None),
limit: int = 100,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
@@ -168,13 +169,16 @@
)
all_ratings = all_ratings.where(PitchingCardRatings.pitchingcard << set_cards)
total_count = all_ratings.count() if not csv else 0
all_ratings = all_ratings.limit(max(0, min(limit, 500)))
if csv:
return_val = query_to_csv(all_ratings)
return Response(content=return_val, media_type="text/csv")
else:
return_val = {
"count": all_ratings.count(),
"count": total_count,
"ratings": [
model_to_dict(x, recurse=not short_output) for x in all_ratings
],
@@ -231,10 +235,10 @@ def get_scouting_dfs(cardset_id: list = None):
series_list = [
pd.Series(
dict([(x.player.player_id, x.range) for x in positions]), name=f"Range P"
dict([(x.player.player_id, x.range) for x in positions]), name="Range P"
),
pd.Series(
dict([(x.player.player_id, x.error) for x in positions]), name=f"Error P"
dict([(x.player.player_id, x.error) for x in positions]), name="Error P"
),
]
logging.debug(f"series_list: {series_list}")
@@ -274,7 +278,7 @@ async def post_calc_scouting(token: str = Depends(oauth2_scheme)):
status_code=401, detail="You are not authorized to calculate card ratings."
)
logging.warning(f"Re-calculating pitching ratings\n\n")
logging.warning("Re-calculating pitching ratings\n\n")
output = get_scouting_dfs()
first = ["player_id", "player_name", "cardset_name", "rarity", "hand", "variant"]
@@ -310,7 +314,7 @@ async def post_calc_basic(token: str = Depends(oauth2_scheme)):
status_code=401, detail="You are not authorized to calculate basic ratings."
)
logging.warning(f"Re-calculating basic pitching ratings\n\n")
logging.warning("Re-calculating basic pitching ratings\n\n")
raw_data = get_scouting_dfs()
logging.debug(f"output: {raw_data}")


@@ -5,14 +5,19 @@ import logging
import pydantic
from pandas import DataFrame
from ..db_engine import db, PitchingStat, model_to_dict, Card, Player, Current, DoesNotExist
from ..db_engine import (
db,
PitchingStat,
model_to_dict,
Card,
Player,
Current,
DoesNotExist,
)
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/pitstats',
tags=['pitstats']
)
router = APIRouter(prefix="/api/v2/pitstats", tags=["pitstats"])
class PitStat(pydantic.BaseModel):
@@ -40,7 +45,7 @@ class PitStat(pydantic.BaseModel):
bsv: Optional[int] = 0
week: int
season: int
created: Optional[int] = int(datetime.timestamp(datetime.now())*1000)
created: Optional[int] = int(datetime.timestamp(datetime.now()) * 1000)
game_id: int
@@ -48,13 +53,23 @@ class PitchingStatModel(pydantic.BaseModel):
stats: List[PitStat]
@router.get('')
@router.get("")
async def get_pit_stats(
card_id: int = None, player_id: int = None, team_id: int = None, vs_team_id: int = None, week: int = None,
season: int = None, week_start: int = None, week_end: int = None, created: int = None, gs: bool = None,
csv: bool = None):
card_id: int = None,
player_id: int = None,
team_id: int = None,
vs_team_id: int = None,
week: int = None,
season: int = None,
week_start: int = None,
week_end: int = None,
created: int = None,
gs: bool = None,
csv: bool = None,
limit: Optional[int] = 100,
):
all_stats = PitchingStat.select().join(Card).join(Player).order_by(PitchingStat.id)
logging.debug(f'pit query:\n\n{all_stats}')
logging.debug(f"pit query:\n\n{all_stats}")
if season is not None:
all_stats = all_stats.where(PitchingStat.season == season)
@@ -83,43 +98,100 @@ async def get_pit_stats(
if gs is not None:
all_stats = all_stats.where(PitchingStat.gs == (1 if gs else 0))
total_count = all_stats.count() if not csv else 0
all_stats = all_stats.limit(max(0, min(limit, 500)))
# if all_stats.count() == 0:
# db.close()
# raise HTTPException(status_code=404, detail=f'No pitching stats found')
if csv:
data_list = [['id', 'card_id', 'player_id', 'cardset', 'team', 'vs_team', 'ip', 'hit', 'run', 'erun', 'so', 'bb', 'hbp',
'wp', 'balk', 'hr', 'ir', 'irs', 'gs', 'win', 'loss', 'hold', 'sv', 'bsv', 'week', 'season',
'created', 'game_id', 'roster_num']]
data_list = [
[
"id",
"card_id",
"player_id",
"cardset",
"team",
"vs_team",
"ip",
"hit",
"run",
"erun",
"so",
"bb",
"hbp",
"wp",
"balk",
"hr",
"ir",
"irs",
"gs",
"win",
"loss",
"hold",
"sv",
"bsv",
"week",
"season",
"created",
"game_id",
"roster_num",
]
]
for line in all_stats:
data_list.append(
[
line.id, line.card.id, line.card.player.player_id, line.card.player.cardset.name, line.team.abbrev,
line.vs_team.abbrev, line.ip, line.hit,
line.run, line.erun, line.so, line.bb, line.hbp, line.wp, line.balk, line.hr, line.ir, line.irs,
line.gs, line.win, line.loss, line.hold, line.sv, line.bsv, line.week, line.season, line.created,
line.game_id, line.roster_num
line.id,
line.card.id,
line.card.player.player_id,
line.card.player.cardset.name,
line.team.abbrev,
line.vs_team.abbrev,
line.ip,
line.hit,
line.run,
line.erun,
line.so,
line.bb,
line.hbp,
line.wp,
line.balk,
line.hr,
line.ir,
line.irs,
line.gs,
line.win,
line.loss,
line.hold,
line.sv,
line.bsv,
line.week,
line.season,
line.created,
line.game_id,
line.roster_num,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_stats.count(), 'stats': []}
return_val = {"count": total_count, "stats": []}
for x in all_stats:
return_val['stats'].append(model_to_dict(x, recurse=False))
return_val["stats"].append(model_to_dict(x, recurse=False))
return return_val
@router.post('')
@router.post("")
async def post_pitstat(stats: PitchingStatModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post stats. This event has been logged.'
detail="You are not authorized to post stats. This event has been logged.",
)
new_stats = []
@@ -149,33 +221,37 @@ async def post_pitstat(stats: PitchingStatModel, token: str = Depends(oauth2_sch
bsv=x.bsv,
week=x.week,
season=x.season,
created=datetime.fromtimestamp(x.created / 1000) if x.created else datetime.now(),
game_id=x.game_id
created=datetime.fromtimestamp(x.created / 1000)
if x.created
else datetime.now(),
game_id=x.game_id,
)
new_stats.append(this_stat)
with db.atomic():
PitchingStat.bulk_create(new_stats, batch_size=15)
raise HTTPException(status_code=200, detail=f'{len(new_stats)} pitching lines have been added')
raise HTTPException(
status_code=200, detail=f"{len(new_stats)} pitching lines have been added"
)
@router.delete('/{stat_id}')
@router.delete("/{stat_id}")
async def delete_pitstat(stat_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete stats. This event has been logged.'
detail="You are not authorized to delete stats. This event has been logged.",
)
try:
this_stat = PitchingStat.get_by_id(stat_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No stat found with id {stat_id}')
raise HTTPException(status_code=404, detail=f"No stat found with id {stat_id}")
count = this_stat.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Stat {stat_id} has been deleted')
raise HTTPException(status_code=200, detail=f"Stat {stat_id} has been deleted")
else:
raise HTTPException(status_code=500, detail=f'Stat {stat_id} was not deleted')
raise HTTPException(status_code=500, detail=f"Stat {stat_id} was not deleted")


@@ -32,6 +32,7 @@ from ..db_engine import (
)
from ..db_helpers import upsert_players
from ..dependencies import oauth2_scheme, valid_token
from ..services.refractor_boost import compute_variant_hash
# ---------------------------------------------------------------------------
# Persistent browser instance (WP-02)
@@ -132,6 +133,19 @@ def normalize_franchise(franchise: str) -> str:
return FRANCHISE_NORMALIZE.get(titled, titled)
def resolve_refractor_tier(player_id: int, variant: int) -> int:
"""Determine the refractor tier (0-4) from a player's variant hash.
Pure math; no DB query needed. Returns 0 for base cards or unknown variants.
"""
if variant == 0:
return 0
for tier in range(1, 5):
if compute_variant_hash(player_id, tier) == variant:
return tier
return 0
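`resolve_refractor_tier` brute-forces the tier by recomputing the variant hash for tiers 1-4 and comparing against the stored variant. The same shape, with a stand-in hash (the real `compute_variant_hash` lives in `services/refractor_boost` and is not shown in this diff):

```python
def fake_variant_hash(player_id: int, tier: int) -> int:
    # Hypothetical stand-in; the real hash function is project-specific.
    return (player_id * 31 + tier * 7) % 10_000

def resolve_tier(player_id: int, variant: int, hash_fn=fake_variant_hash) -> int:
    # Variant 0 is a base card; otherwise try tiers 1-4 and return the
    # tier whose hash matches, falling back to 0 for unknown variants.
    if variant == 0:
        return 0
    for tier in range(1, 5):
        if hash_fn(player_id, tier) == variant:
            return tier
    return 0
```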
router = APIRouter(prefix="/api/v2/players", tags=["players"])
@@ -723,6 +737,9 @@ async def get_batter_card(
variant: int = 0,
d: str = None,
html: Optional[bool] = False,
tier: Optional[int] = Query(
None, ge=0, le=4, description="Override refractor tier for preview (dev only)"
),
):
try:
this_player = Player.get_by_id(player_id)
@@ -740,6 +757,7 @@ async def get_batter_card(
f"storage/cards/cardset-{this_player.cardset.id}/{card_type}/{player_id}-{d}-v{variant}.png"
)
and html is False
and tier is None
):
return FileResponse(
path=f"storage/cards/cardset-{this_player.cardset.id}/{card_type}/{player_id}-{d}-v{variant}.png",
@@ -786,6 +804,9 @@ async def get_batter_card(
card_data["cardset_name"] = this_player.cardset.name
else:
card_data["cardset_name"] = this_player.description
card_data["refractor_tier"] = (
tier if tier is not None else resolve_refractor_tier(player_id, variant)
)
card_data["request"] = request
html_response = templates.TemplateResponse("player_card.html", card_data)
@@ -823,6 +844,9 @@ async def get_batter_card(
card_data["cardset_name"] = this_player.cardset.name
else:
card_data["cardset_name"] = this_player.description
card_data["refractor_tier"] = (
tier if tier is not None else resolve_refractor_tier(player_id, variant)
)
card_data["request"] = request
html_response = templates.TemplateResponse("player_card.html", card_data)

app/routers_v2/refractor.py (new file, 434 lines)

@@ -0,0 +1,434 @@
import os
from fastapi import APIRouter, Depends, HTTPException, Query
import logging
from typing import Optional
from ..db_engine import model_to_dict
from ..dependencies import oauth2_scheme, valid_token
from ..services.refractor_init import initialize_card_refractor, _determine_card_type
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/v2/refractor", tags=["refractor"])
# Tier -> threshold attribute name. Index = current_tier; value is the
# attribute on RefractorTrack whose value is the *next* threshold to reach.
# Tier 4 is fully evolved so there is no next threshold (None sentinel).
_NEXT_THRESHOLD_ATTR = {
0: "t1_threshold",
1: "t2_threshold",
2: "t3_threshold",
3: "t4_threshold",
4: None,
}
def _build_card_state_response(state, player_name=None) -> dict:
"""Serialise a RefractorCardState into the standard API response shape.
Produces a flat dict with player_id and team_id as plain integers,
a nested 'track' dict with all threshold fields, and computed fields:
- 'next_threshold': threshold for the tier immediately above (None when fully evolved).
- 'progress_pct': current_value / next_threshold * 100, rounded to 1 decimal
(None when fully evolved or next_threshold is zero).
- 'player_name': included when passed (e.g. from a list join); omitted otherwise.
Uses model_to_dict(recurse=False) internally so FK fields are returned
as IDs rather than nested objects, then promotes the needed IDs up to
the top level.
"""
track = state.track
track_dict = model_to_dict(track, recurse=False)
next_attr = _NEXT_THRESHOLD_ATTR.get(state.current_tier)
next_threshold = getattr(track, next_attr) if next_attr else None
progress_pct = None
if next_threshold is not None and next_threshold > 0:
progress_pct = round((state.current_value / next_threshold) * 100, 1)
result = {
"player_id": state.player_id,
"team_id": state.team_id,
"current_tier": state.current_tier,
"current_value": state.current_value,
"fully_evolved": state.fully_evolved,
"last_evaluated_at": (
state.last_evaluated_at.isoformat()
if hasattr(state.last_evaluated_at, "isoformat")
else state.last_evaluated_at or None
),
"track": track_dict,
"next_threshold": next_threshold,
"progress_pct": progress_pct,
}
if player_name is not None:
result["player_name"] = player_name
return result
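The two computed fields in `_build_card_state_response` are simple: look up the next tier's threshold field by name, then divide. A dict-based sketch (a plain dict stands in for the `RefractorTrack` row):

```python
NEXT_THRESHOLD_KEY = {
    0: "t1_threshold",
    1: "t2_threshold",
    2: "t3_threshold",
    3: "t4_threshold",
    4: None,  # tier 4 is fully evolved; there is no next threshold
}

def progress_pct(current_tier: int, current_value: float, track: dict):
    key = NEXT_THRESHOLD_KEY.get(current_tier)
    if key is None:
        return None                      # fully evolved
    threshold = track.get(key)
    if not threshold:
        return None                      # missing or zero threshold
    return round(current_value / threshold * 100, 1)
```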
@router.get("/tracks")
async def list_tracks(
card_type: Optional[str] = Query(default=None),
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import RefractorTrack
query = RefractorTrack.select()
if card_type is not None:
query = query.where(RefractorTrack.card_type == card_type)
items = [model_to_dict(t, recurse=False) for t in query]
return {"count": len(items), "items": items}
@router.get("/tracks/{track_id}")
async def get_track(track_id: int, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import RefractorTrack
try:
track = RefractorTrack.get_by_id(track_id)
except Exception:
raise HTTPException(status_code=404, detail=f"Track {track_id} not found")
return model_to_dict(track, recurse=False)
@router.get("/cards")
async def list_card_states(
team_id: int = Query(...),
card_type: Optional[str] = Query(default=None),
tier: Optional[int] = Query(default=None, ge=0, le=4),
season: Optional[int] = Query(default=None),
progress: Optional[str] = Query(default=None),
evaluated_only: bool = Query(default=True),
limit: int = Query(default=10, ge=1, le=100),
offset: int = Query(default=0, ge=0),
token: str = Depends(oauth2_scheme),
):
"""List RefractorCardState rows for a team, with optional filters and pagination.
Required:
team_id -- filter to this team's cards; returns empty list if team has no states
Optional filters:
card_type -- one of 'batter', 'sp', 'rp'; filters by RefractorTrack.card_type
tier -- filter by current_tier (0-4)
season -- filter to players who have batting or pitching season stats in that
season (EXISTS subquery against batting/pitching_season_stats)
progress -- 'close' = only cards within 80% of their next tier threshold;
fully evolved cards are always excluded from this filter
evaluated_only -- default True; when True, excludes cards where last_evaluated_at
is NULL (cards created but never run through the evaluator).
Set to False to include all rows, including zero-value placeholders.
Pagination:
limit -- page size (1-100, default 10)
offset -- items to skip (default 0)
Response: {"count": N, "items": [...]}
count is the total matching rows before limit/offset.
Each item includes player_name and progress_pct in addition to the
standard single-card response fields.
Sort order: current_tier DESC, current_value DESC.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import (
RefractorCardState,
RefractorTrack,
Player,
BattingSeasonStats,
PitchingSeasonStats,
fn,
Case,
JOIN,
)
query = (
RefractorCardState.select(RefractorCardState, RefractorTrack, Player)
.join(RefractorTrack)
.switch(RefractorCardState)
.join(
Player, JOIN.LEFT_OUTER, on=(RefractorCardState.player == Player.player_id)
)
.where(RefractorCardState.team == team_id)
.order_by(
RefractorCardState.current_tier.desc(),
RefractorCardState.current_value.desc(),
)
)
if card_type is not None:
query = query.where(RefractorTrack.card_type == card_type)
if tier is not None:
query = query.where(RefractorCardState.current_tier == tier)
if season is not None:
batter_exists = BattingSeasonStats.select().where(
(BattingSeasonStats.player == RefractorCardState.player)
& (BattingSeasonStats.team == RefractorCardState.team)
& (BattingSeasonStats.season == season)
)
pitcher_exists = PitchingSeasonStats.select().where(
(PitchingSeasonStats.player == RefractorCardState.player)
& (PitchingSeasonStats.team == RefractorCardState.team)
& (PitchingSeasonStats.season == season)
)
query = query.where(fn.EXISTS(batter_exists) | fn.EXISTS(pitcher_exists))
if progress == "close":
next_threshold_expr = Case(
RefractorCardState.current_tier,
(
(0, RefractorTrack.t1_threshold),
(1, RefractorTrack.t2_threshold),
(2, RefractorTrack.t3_threshold),
(3, RefractorTrack.t4_threshold),
),
None,
)
query = query.where(
(RefractorCardState.fully_evolved == False) # noqa: E712
& (RefractorCardState.current_value >= next_threshold_expr * 0.8)
)
if evaluated_only:
query = query.where(RefractorCardState.last_evaluated_at.is_null(False))
total = query.count()
items = []
for state in query.offset(offset).limit(limit):
player_name = None
try:
player_name = state.player.p_name
except Exception:
pass
items.append(_build_card_state_response(state, player_name=player_name))
return {"count": total, "items": items}
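A plain-Python sketch of the `progress='close'` filter above: a hypothetical mirror of the SQL `Case` that maps `current_tier` to the next tier's threshold. The field names (`t1_threshold` through `t4_threshold`) follow the `RefractorTrack` columns referenced in the query, but the dicts here are stand-ins, not the ORM models.

```python
def is_close_to_next_tier(state: dict, track: dict) -> bool:
    # Mirrors the CASE expression: tier N looks up the threshold for tier N+1.
    thresholds = {
        0: track["t1_threshold"],
        1: track["t2_threshold"],
        2: track["t3_threshold"],
        3: track["t4_threshold"],
    }
    next_threshold = thresholds.get(state["current_tier"])
    # Fully evolved cards (and tier 4, which has no next threshold) never match.
    if state["fully_evolved"] or next_threshold is None:
        return False
    # "Close" means within 80% of the next threshold.
    return state["current_value"] >= next_threshold * 0.8

# Illustrative threshold values (assumed, not from the real tracks):
track = {"t1_threshold": 100, "t2_threshold": 250,
         "t3_threshold": 500, "t4_threshold": 1000}
print(is_close_to_next_tier(
    {"current_tier": 0, "current_value": 80, "fully_evolved": False}, track))
```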
@router.get("/cards/{card_id}")
async def get_card_state(card_id: int, token: str = Depends(oauth2_scheme)):
"""Return the RefractorCardState for a card identified by its Card.id.
Resolves card_id -> (player_id, team_id) via the Card table, then looks
up the matching RefractorCardState row. Because duplicate cards for the
same player+team share one state row (unique-(player,team) constraint),
any card_id belonging to that player on that team returns the same state.
Returns 404 when:
- The card_id does not exist in the Card table.
- The card exists but has no corresponding RefractorCardState yet.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import Card, RefractorCardState, RefractorTrack, DoesNotExist
# Resolve card_id to player+team
try:
card = Card.get_by_id(card_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f"Card {card_id} not found")
# Look up the refractor state for this (player, team) pair, joining the
# track so a single query resolves both rows.
try:
state = (
RefractorCardState.select(RefractorCardState, RefractorTrack)
.join(RefractorTrack)
.where(
(RefractorCardState.player == card.player_id)
& (RefractorCardState.team == card.team_id)
)
.get()
)
except DoesNotExist:
raise HTTPException(
status_code=404,
detail=f"No refractor state for card {card_id}",
)
return _build_card_state_response(state)
@router.post("/cards/{card_id}/evaluate")
async def evaluate_card(card_id: int, token: str = Depends(oauth2_scheme)):
"""Force-recalculate refractor state for a card from career stats.
Resolves card_id to (player_id, team_id), then recomputes the refractor
tier from all player_season_stats rows for that pair. Idempotent.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import Card
from ..services.refractor_evaluator import evaluate_card as _evaluate
try:
card = Card.get_by_id(card_id)
except Exception:
raise HTTPException(status_code=404, detail=f"Card {card_id} not found")
try:
result = _evaluate(card.player_id, card.team_id)
except ValueError as exc:
raise HTTPException(status_code=404, detail=str(exc))
return result
@router.post("/evaluate-game/{game_id}")
async def evaluate_game(game_id: int, token: str = Depends(oauth2_scheme)):
"""Evaluate refractor state for all players who appeared in a game.
Finds all unique (player_id, team_id) pairs from the game's StratPlay rows,
then for each pair that has a RefractorCardState, re-computes the refractor
tier. Pairs without a state row are auto-initialized on-the-fly via
initialize_card_refractor (idempotent). Per-player errors are logged but
do not abort the batch.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import RefractorCardState, Player, StratPlay
from ..services.refractor_boost import apply_tier_boost
from ..services.refractor_evaluator import evaluate_card
plays = list(StratPlay.select().where(StratPlay.game == game_id))
pairs: set[tuple[int, int]] = set()
for play in plays:
if play.batter_id is not None:
pairs.add((play.batter_id, play.batter_team_id))
if play.pitcher_id is not None:
pairs.add((play.pitcher_id, play.pitcher_team_id))
evaluated = 0
tier_ups = []
boost_enabled = os.environ.get("REFRACTOR_BOOST_ENABLED", "true").lower() != "false"
for player_id, team_id in pairs:
try:
state = RefractorCardState.get_or_none(
(RefractorCardState.player_id == player_id)
& (RefractorCardState.team_id == team_id)
)
if state is None:
try:
player = Player.get_by_id(player_id)
card_type = _determine_card_type(player)
state = initialize_card_refractor(player_id, team_id, card_type)
except Exception:
logger.warning(
f"Refractor auto-init failed for player={player_id} "
f"team={team_id} — skipping"
)
if state is None:
continue
old_tier = state.current_tier
# Use dry_run=True so that current_tier is NOT written here.
# apply_tier_boost() writes current_tier + variant atomically on
# tier-up. If no tier-up occurs, apply_tier_boost is not called
# and the tier stays at old_tier (correct behaviour).
result = evaluate_card(player_id, team_id, dry_run=True)
evaluated += 1
# Use computed_tier (what the formula says) to detect tier-ups.
computed_tier = result.get("computed_tier", old_tier)
if computed_tier > old_tier:
player_name = "Unknown"
try:
p = Player.get_by_id(player_id)
player_name = p.p_name
except Exception:
pass
# Phase 2: Apply rating boosts for each tier gained.
# apply_tier_boost() writes current_tier + variant atomically.
# If it fails, current_tier stays at old_tier — automatic retry next game.
boost_result = None
if not boost_enabled:
# Boost disabled via REFRACTOR_BOOST_ENABLED=false.
# Skip notification — current_tier was not written (dry_run),
# so reporting a tier-up would be a false notification.
continue
card_type = state.track.card_type if state.track else None
if card_type:
last_successful_tier = old_tier
failing_tier = old_tier + 1
try:
for tier in range(old_tier + 1, computed_tier + 1):
failing_tier = tier
boost_result = apply_tier_boost(
player_id, team_id, tier, card_type
)
last_successful_tier = tier
except Exception as boost_exc:
logger.warning(
f"Refractor boost failed for player={player_id} "
f"team={team_id} tier={failing_tier}: {boost_exc}"
)
# Report only the tiers that actually succeeded.
# If none succeeded, skip the tier_up notification entirely.
if last_successful_tier == old_tier:
continue
# At least one intermediate tier was committed; report that.
computed_tier = last_successful_tier
else:
# No card_type means no track — skip boost and skip notification.
# A false tier-up notification must not be sent when the boost
# was never applied (current_tier was never written to DB).
logger.warning(
f"Refractor boost skipped for player={player_id} "
f"team={team_id}: no card_type on track"
)
continue
tier_up_entry = {
"player_id": player_id,
"team_id": team_id,
"player_name": player_name,
"old_tier": old_tier,
"new_tier": computed_tier,
"current_value": result.get("current_value", 0),
"track_name": state.track.name if state.track else "Unknown",
}
# Non-breaking addition: include boost info when available.
if boost_result:
tier_up_entry["variant_created"] = boost_result.get(
"variant_created"
)
tier_ups.append(tier_up_entry)
except Exception as exc:
logger.warning(
f"Refractor eval failed for player={player_id} team={team_id}: {exc}"
)
return {"evaluated": evaluated, "tier_ups": tier_ups}
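A minimal sketch of the multi-tier boost loop above: boosts are applied one tier at a time, and only the last tier that actually committed is reported, so a mid-loop failure leaves the card at the last successful tier for automatic retry next game. `apply_tier_boost` here is a caller-supplied callable (an assumption for the sketch), standing in for `services.refractor_boost.apply_tier_boost`.

```python
def boost_through(old_tier: int, computed_tier: int, apply_tier_boost) -> int:
    last_successful_tier = old_tier
    for tier in range(old_tier + 1, computed_tier + 1):
        try:
            apply_tier_boost(tier)
        except Exception:
            break  # stop at the first failure; remaining tiers retry next game
        last_successful_tier = tier
    # == old_tier means nothing committed: skip the tier-up notification.
    return last_successful_tier

def flaky_boost(tier):
    # Hypothetical boost that fails from tier 3 onward.
    if tier >= 3:
        raise RuntimeError("boost failed")

print(boost_through(1, 4, flaky_boost))
```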


@@ -8,10 +8,7 @@ from ..db_engine import Result, model_to_dict, Team, DataError, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/results',
tags=['results']
)
router = APIRouter(prefix="/api/v2/results", tags=["results"])
class ResultModel(pydantic.BaseModel):
@@ -31,15 +28,29 @@ class ResultModel(pydantic.BaseModel):
game_type: str
@router.get('')
@router.get("")
async def get_results(
away_team_id: Optional[int] = None, home_team_id: Optional[int] = None, team_one_id: Optional[int] = None,
team_two_id: Optional[int] = None, away_score_min: Optional[int] = None, away_score_max: Optional[int] = None,
home_score_min: Optional[int] = None, home_score_max: Optional[int] = None, bothscore_min: Optional[int] = None,
bothscore_max: Optional[int] = None, season: Optional[int] = None, week: Optional[int] = None,
week_start: Optional[int] = None, week_end: Optional[int] = None, ranked: Optional[bool] = None,
short_game: Optional[bool] = None, game_type: Optional[str] = None, vs_ai: Optional[bool] = None,
csv: Optional[bool] = None):
away_team_id: Optional[int] = None,
home_team_id: Optional[int] = None,
team_one_id: Optional[int] = None,
team_two_id: Optional[int] = None,
away_score_min: Optional[int] = None,
away_score_max: Optional[int] = None,
home_score_min: Optional[int] = None,
home_score_max: Optional[int] = None,
bothscore_min: Optional[int] = None,
bothscore_max: Optional[int] = None,
season: Optional[int] = None,
week: Optional[int] = None,
week_start: Optional[int] = None,
week_end: Optional[int] = None,
ranked: Optional[bool] = None,
short_game: Optional[bool] = None,
game_type: Optional[str] = None,
vs_ai: Optional[bool] = None,
csv: Optional[bool] = None,
limit: int = 100,
):
all_results = Result.select()
# if all_results.count() == 0:
@@ -51,28 +62,40 @@ async def get_results(
this_team = Team.get_by_id(away_team_id)
all_results = all_results.where(Result.away_team == this_team)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No team found with id {away_team_id}')
raise HTTPException(
status_code=404, detail=f"No team found with id {away_team_id}"
)
if home_team_id is not None:
try:
this_team = Team.get_by_id(home_team_id)
all_results = all_results.where(Result.home_team == this_team)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No team found with id {home_team_id}')
raise HTTPException(
status_code=404, detail=f"No team found with id {home_team_id}"
)
if team_one_id is not None:
try:
this_team = Team.get_by_id(team_one_id)
all_results = all_results.where((Result.home_team == this_team) | (Result.away_team == this_team))
all_results = all_results.where(
(Result.home_team == this_team) | (Result.away_team == this_team)
)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No team found with id {team_one_id}')
raise HTTPException(
status_code=404, detail=f"No team found with id {team_one_id}"
)
if team_two_id is not None:
try:
this_team = Team.get_by_id(team_two_id)
all_results = all_results.where((Result.home_team == this_team) | (Result.away_team == this_team))
all_results = all_results.where(
(Result.home_team == this_team) | (Result.away_team == this_team)
)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No team found with id {team_two_id}')
raise HTTPException(
status_code=404, detail=f"No team found with id {team_two_id}"
)
if away_score_min is not None:
all_results = all_results.where(Result.away_score >= away_score_min)
@@ -87,10 +110,14 @@ async def get_results(
all_results = all_results.where(Result.home_score <= home_score_max)
if bothscore_min is not None:
all_results = all_results.where((Result.home_score >= bothscore_min) & (Result.away_score >= bothscore_min))
all_results = all_results.where(
(Result.home_score >= bothscore_min) & (Result.away_score >= bothscore_min)
)
if bothscore_max is not None:
all_results = all_results.where((Result.home_score <= bothscore_max) & (Result.away_score <= bothscore_max))
all_results = all_results.where(
(Result.home_score <= bothscore_max) & (Result.away_score <= bothscore_max)
)
if season is not None:
all_results = all_results.where(Result.season == season)
@@ -114,6 +141,9 @@ async def get_results(
all_results = all_results.where(Result.game_type == game_type)
all_results = all_results.order_by(Result.id)
limit = max(0, min(limit, 500))
total_count = all_results.count() if not csv else 0
all_results = all_results.limit(limit)
# Not functional
# if vs_ai is not None:
# AwayTeam = Team.alias()
@@ -134,60 +164,115 @@ async def get_results(
# logging.info(f'Result Query:\n\n{all_results}')
if csv:
data_list = [['id', 'away_abbrev', 'home_abbrev', 'away_score', 'home_score', 'away_tv', 'home_tv',
'game_type', 'season', 'week', 'short_game', 'ranked']]
data_list = [
[
"id",
"away_abbrev",
"home_abbrev",
"away_score",
"home_score",
"away_tv",
"home_tv",
"game_type",
"season",
"week",
"short_game",
"ranked",
]
]
for line in all_results:
data_list.append([
line.id, line.away_team.abbrev, line.home_team.abbrev, line.away_score, line.home_score,
line.away_team_value, line.home_team_value, line.game_type if line.game_type else 'minor-league',
line.season, line.week, line.short_game, line.ranked
])
data_list.append(
[
line.id,
line.away_team.abbrev,
line.home_team.abbrev,
line.away_score,
line.home_score,
line.away_team_value,
line.home_team_value,
line.game_type if line.game_type else "minor-league",
line.season,
line.week,
line.short_game,
line.ranked,
]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_results.count(), 'results': []}
return_val = {"count": total_count, "results": []}
for x in all_results:
return_val['results'].append(model_to_dict(x))
return_val["results"].append(model_to_dict(x))
return return_val
@router.get('/{result_id}')
@router.get("/{result_id}")
async def get_one_results(result_id, csv: Optional[bool] = None):
try:
this_result = Result.get_by_id(result_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No result found with id {result_id}')
raise HTTPException(
status_code=404, detail=f"No result found with id {result_id}"
)
if csv:
data_list = [
['id', 'away_abbrev', 'home_abbrev', 'away_score', 'home_score', 'away_tv', 'home_tv', 'game_type',
'season', 'week', 'game_type'],
[this_result.id, this_result.away_team.abbrev, this_result.away_team.abbrev, this_result.away_score,
this_result.home_score, this_result.away_team_value, this_result.home_team_value,
this_result.game_type if this_result.game_type else 'minor-league',
this_result.season, this_result.week, this_result.game_type]
[
"id",
"away_abbrev",
"home_abbrev",
"away_score",
"home_score",
"away_tv",
"home_tv",
"game_type",
"season",
"week",
"game_type",
],
[
this_result.id,
this_result.away_team.abbrev,
this_result.home_team.abbrev,
this_result.away_score,
this_result.home_score,
this_result.away_team_value,
this_result.home_team_value,
this_result.game_type if this_result.game_type else "minor-league",
this_result.season,
this_result.week,
this_result.game_type,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_result)
return return_val
@router.get('/team/{team_id}')
@router.get("/team/{team_id}")
async def get_team_results(
team_id: int, season: Optional[int] = None, week: Optional[int] = None, csv: Optional[bool] = False):
all_results = Result.select().where((Result.away_team_id == team_id) | (Result.home_team_id == team_id)).order_by(Result.id)
team_id: int,
season: Optional[int] = None,
week: Optional[int] = None,
csv: Optional[bool] = False,
):
all_results = (
Result.select()
.where((Result.away_team_id == team_id) | (Result.home_team_id == team_id))
.order_by(Result.id)
)
try:
this_team = Team.get_by_id(team_id)
except DoesNotExist as e:
logging.error(f'Unknown team id {team_id} trying to pull team results')
raise HTTPException(404, f'Team id {team_id} not found')
except DoesNotExist:
logging.error(f"Unknown team id {team_id} trying to pull team results")
raise HTTPException(404, f"Team id {team_id} not found")
if season is not None:
all_results = all_results.where(Result.season == season)
@@ -224,31 +309,38 @@ async def get_team_results(
if csv:
data_list = [
['team_id', 'ranked_wins', 'ranked_losses', 'casual_wins', 'casual_losses', 'team_ranking'],
[team_id, r_wins, r_loss, c_wins, c_loss, this_team.ranking]
[
"team_id",
"ranked_wins",
"ranked_losses",
"casual_wins",
"casual_losses",
"team_ranking",
],
[team_id, r_wins, r_loss, c_wins, c_loss, this_team.ranking],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {
'team': model_to_dict(this_team),
'ranked_wins': r_wins,
'ranked_losses': r_loss,
'casual_wins': c_wins,
'casual_losses': c_loss,
"team": model_to_dict(this_team),
"ranked_wins": r_wins,
"ranked_losses": r_loss,
"casual_wins": c_wins,
"casual_losses": c_loss,
}
return return_val
@router.post('')
@router.post("")
async def post_result(result: ResultModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post results. This event has been logged.'
detail="You are not authorized to post results. This event has been logged.",
)
this_result = Result(**result.__dict__)
@@ -256,24 +348,28 @@ async def post_result(result: ResultModel, token: str = Depends(oauth2_scheme)):
if result.ranked:
if not result.away_team_ranking:
error = f'Ranked game did not include away team ({result.away_team_id}) ranking.'
error = f"Ranked game did not include away team ({result.away_team_id}) ranking."
logging.error(error)
raise DataError(error)
if not result.home_team_ranking:
error = f'Ranked game did not include home team ({result.home_team_id}) ranking.'
error = f"Ranked game did not include home team ({result.home_team_id}) ranking."
logging.error(error)
raise DataError(error)
k_value = 20 if result.short_game else 60
ratio = (result.home_team_ranking - result.away_team_ranking) / 400
exp_score = 1 / (1 + (10 ** ratio))
exp_score = 1 / (1 + (10**ratio))
away_win = True if result.away_score > result.home_score else False
total_delta = k_value * exp_score
high_delta = total_delta * exp_score if exp_score > .5 else total_delta * (1 - exp_score)
high_delta = (
total_delta * exp_score
if exp_score > 0.5
else total_delta * (1 - exp_score)
)
low_delta = total_delta - high_delta
# exp_score > .5 means away team is favorite
if exp_score > .5 and away_win:
if exp_score > 0.5 and away_win:
final_delta = low_delta
away_delta = low_delta * 3
home_delta = -low_delta
@@ -281,7 +377,7 @@ async def post_result(result: ResultModel, token: str = Depends(oauth2_scheme)):
final_delta = high_delta
away_delta = high_delta * 3
home_delta = -high_delta
elif exp_score <= .5 and not away_win:
elif exp_score <= 0.5 and not away_win:
final_delta = low_delta
away_delta = -low_delta
home_delta = low_delta * 3
@@ -294,18 +390,20 @@ async def post_result(result: ResultModel, token: str = Depends(oauth2_scheme)):
away_delta = 0
home_delta = 0
logging.debug(f'/results ranking deltas\n\nk_value: {k_value} / ratio: {ratio} / '
f'exp_score: {exp_score} / away_win: {away_win} / total_delta: {total_delta} / '
f'high_delta: {high_delta} / low_delta: {low_delta} / final_delta: {final_delta} / ')
logging.debug(
f"/results ranking deltas\n\nk_value: {k_value} / ratio: {ratio} / "
f"exp_score: {exp_score} / away_win: {away_win} / total_delta: {total_delta} / "
f"high_delta: {high_delta} / low_delta: {low_delta} / final_delta: {final_delta} / "
)
away_team = Team.get_by_id(result.away_team_id)
away_team.ranking += away_delta
away_team.save()
logging.info(f'Just updated {away_team.abbrev} ranking to {away_team.ranking}')
logging.info(f"Just updated {away_team.abbrev} ranking to {away_team.ranking}")
home_team = Team.get_by_id(result.home_team_id)
home_team.ranking += home_delta
home_team.save()
logging.info(f'Just updated {home_team.abbrev} ranking to {home_team.ranking}')
logging.info(f"Just updated {home_team.abbrev} ranking to {home_team.ranking}")
if saved == 1:
return_val = model_to_dict(this_result)
@@ -313,27 +411,38 @@ async def post_result(result: ResultModel, token: str = Depends(oauth2_scheme)):
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that roster'
detail="Well slap my ass and call me a teapot; I could not save that roster",
)
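A worked sketch of the ranking-delta arithmetic in `post_result` above (k-factor, expected score, and the high/low delta split). These are pure functions whose names mirror the route's locals; nothing here touches the database, and the example rankings are illustrative only.

```python
def ranking_deltas(away_ranking: float, home_ranking: float, short_game: bool):
    # Short games move rankings less than full games.
    k_value = 20 if short_game else 60
    ratio = (home_ranking - away_ranking) / 400
    exp_score = 1 / (1 + 10 ** ratio)  # > 0.5 means the away team is the favorite
    total_delta = k_value * exp_score
    # The larger share of the delta goes with the more surprising outcome.
    high_delta = (
        total_delta * exp_score if exp_score > 0.5 else total_delta * (1 - exp_score)
    )
    low_delta = total_delta - high_delta
    return exp_score, total_delta, high_delta, low_delta

# Evenly matched full game: expected score is exactly 0.5 and the
# high/low split is symmetric.
print(ranking_deltas(1500, 1500, short_game=False))
```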
@router.patch('/{result_id}')
@router.patch("/{result_id}")
async def patch_result(
result_id, away_team_id: Optional[int] = None, home_team_id: Optional[int] = None,
away_score: Optional[int] = None, home_score: Optional[int] = None, away_team_value: Optional[int] = None,
home_team_value: Optional[int] = None, scorecard: Optional[str] = None, week: Optional[int] = None,
season: Optional[int] = None, short_game: Optional[bool] = None, game_type: Optional[str] = None,
token: str = Depends(oauth2_scheme)):
result_id,
away_team_id: Optional[int] = None,
home_team_id: Optional[int] = None,
away_score: Optional[int] = None,
home_score: Optional[int] = None,
away_team_value: Optional[int] = None,
home_team_value: Optional[int] = None,
scorecard: Optional[str] = None,
week: Optional[int] = None,
season: Optional[int] = None,
short_game: Optional[bool] = None,
game_type: Optional[str] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch results. This event has been logged.'
detail="You are not authorized to patch results. This event has been logged.",
)
try:
this_result = Result.get_by_id(result_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No result found with id {result_id}')
raise HTTPException(
status_code=404, detail=f"No result found with id {result_id}"
)
if away_team_id is not None:
this_result.away_team_id = away_team_id
@@ -377,27 +486,32 @@ async def patch_result(
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that event'
detail="Well slap my ass and call me a teapot; I could not save that event",
)
@router.delete('/{result_id}')
@router.delete("/{result_id}")
async def delete_result(result_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post results. This event has been logged.'
detail="You are not authorized to post results. This event has been logged.",
)
try:
this_result = Result.get_by_id(result_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No result found with id {result_id}')
raise HTTPException(
status_code=404, detail=f"No result found with id {result_id}"
)
count = this_result.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Result {result_id} has been deleted')
raise HTTPException(
status_code=200, detail=f"Result {result_id} has been deleted"
)
else:
raise HTTPException(status_code=500, detail=f'Result {result_id} was not deleted')
raise HTTPException(
status_code=500, detail=f"Result {result_id} was not deleted"
)


@@ -9,10 +9,7 @@ from ..db_engine import Reward, model_to_dict, fn, DoesNotExist
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/rewards',
tags=['rewards']
)
router = APIRouter(prefix="/api/v2/rewards", tags=["rewards"])
class RewardModel(pydantic.BaseModel):
@@ -20,19 +17,23 @@ class RewardModel(pydantic.BaseModel):
season: int
week: int
team_id: int
created: Optional[int] = int(datetime.timestamp(datetime.now())*1000)
created: Optional[int] = int(datetime.timestamp(datetime.now()) * 1000)
@router.get('')
@router.get("")
async def get_rewards(
name: Optional[str] = None, in_name: Optional[str] = None, team_id: Optional[int] = None,
season: Optional[int] = None, week: Optional[int] = None, created_after: Optional[int] = None,
flat: Optional[bool] = False, csv: Optional[bool] = None):
name: Optional[str] = None,
in_name: Optional[str] = None,
team_id: Optional[int] = None,
season: Optional[int] = None,
week: Optional[int] = None,
created_after: Optional[int] = None,
flat: Optional[bool] = False,
csv: Optional[bool] = None,
limit: Optional[int] = 100,
):
all_rewards = Reward.select().order_by(Reward.id)
if all_rewards.count() == 0:
raise HTTPException(status_code=404, detail='There are no rewards to filter')
if name is not None:
all_rewards = all_rewards.where(fn.Lower(Reward.name) == name.lower())
if team_id is not None:
@@ -48,63 +49,73 @@ async def get_rewards(
if week is not None:
all_rewards = all_rewards.where(Reward.week == week)
if all_rewards.count() == 0:
raise HTTPException(status_code=404, detail=f'No rewards found')
total_count = all_rewards.count()
if total_count == 0:
raise HTTPException(status_code=404, detail="No rewards found")
limit = max(0, min(limit, 500))
all_rewards = all_rewards.limit(limit)
if csv:
data_list = [['id', 'name', 'team', 'daily', 'created']]
data_list = [["id", "name", "team", "daily", "created"]]
for line in all_rewards:
data_list.append(
[
line.id, line.name, line.team.id, line.daily, line.created
]
[line.id, line.name, line.team.id, line.daily, line.created]
)
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = {'count': all_rewards.count(), 'rewards': []}
return_val = {"count": total_count, "rewards": []}
for x in all_rewards:
return_val['rewards'].append(model_to_dict(x, recurse=not flat))
return_val["rewards"].append(model_to_dict(x, recurse=not flat))
return return_val
@router.get('/{reward_id}')
@router.get("/{reward_id}")
async def get_one_reward(reward_id, csv: Optional[bool] = False):
try:
this_reward = Reward.get_by_id(reward_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No reward found with id {reward_id}')
raise HTTPException(
status_code=404, detail=f"No reward found with id {reward_id}"
)
if csv:
data_list = [
['id', 'name', 'card_count', 'description'],
[this_reward.id, this_reward.name, this_reward.team.id, this_reward.daily, this_reward.created]
["id", "name", "team", "daily", "created"],
[
this_reward.id,
this_reward.name,
this_reward.team.id,
this_reward.daily,
this_reward.created,
],
]
return_val = DataFrame(data_list).to_csv(header=False, index=False)
return Response(content=return_val, media_type='text/csv')
return Response(content=return_val, media_type="text/csv")
else:
return_val = model_to_dict(this_reward)
return return_val
@router.post('')
@router.post("")
async def post_rewards(reward: RewardModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to post rewards. This event has been logged.'
detail="You are not authorized to post rewards. This event has been logged.",
)
reward_data = reward.dict()
# Convert milliseconds timestamp to datetime for PostgreSQL
if reward_data.get('created'):
reward_data['created'] = datetime.fromtimestamp(reward_data['created'] / 1000)
if reward_data.get("created"):
reward_data["created"] = datetime.fromtimestamp(reward_data["created"] / 1000)
this_reward = Reward(**reward_data)
saved = this_reward.save()
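A quick sketch of the millisecond-epoch conversion used for `Reward.created` above: `datetime.fromtimestamp` expects seconds, so the API's millisecond value is divided by 1000. `timezone.utc` is used here to keep the example deterministic (the route itself uses the naive local-time form), and the payload value is an assumed example.

```python
from datetime import datetime, timezone

created_ms = 1_700_000_000_000  # example millisecond timestamp from a payload
# Convert ms -> s before handing the value to datetime.
created_dt = datetime.fromtimestamp(created_ms / 1000, tz=timezone.utc)
print(created_dt.isoformat())  # → 2023-11-14T22:13:20+00:00
```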
@@ -114,24 +125,30 @@ async def post_rewards(reward: RewardModel, token: str = Depends(oauth2_scheme))
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that cardset'
detail="Well slap my ass and call me a teapot; I could not save that cardset",
)
@router.patch('/{reward_id}')
@router.patch("/{reward_id}")
async def patch_reward(
reward_id, name: Optional[str] = None, team_id: Optional[int] = None, created: Optional[int] = None,
token: str = Depends(oauth2_scheme)):
reward_id,
name: Optional[str] = None,
team_id: Optional[int] = None,
created: Optional[int] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to patch rewards. This event has been logged.'
detail="You are not authorized to patch rewards. This event has been logged.",
)
try:
this_reward = Reward.get_by_id(reward_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No reward found with id {reward_id}')
raise HTTPException(
status_code=404, detail=f"No reward found with id {reward_id}"
)
if name is not None:
this_reward.name = name
@@ -147,28 +164,32 @@ async def patch_reward(
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that rarity'
detail="Well slap my ass and call me a teapot; I could not save that rarity",
)
@router.delete('/{reward_id}')
@router.delete("/{reward_id}")
async def delete_reward(reward_id, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('Bad Token: [REDACTED]')
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(
status_code=401,
detail='You are not authorized to delete rewards. This event has been logged.'
detail="You are not authorized to delete rewards. This event has been logged.",
)
try:
this_reward = Reward.get_by_id(reward_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail=f'No reward found with id {reward_id}')
raise HTTPException(
status_code=404, detail=f"No reward found with id {reward_id}"
)
count = this_reward.delete_instance()
if count == 1:
raise HTTPException(status_code=200, detail=f'Reward {reward_id} has been deleted')
raise HTTPException(
status_code=200, detail=f"Reward {reward_id} has been deleted"
)
else:
raise HTTPException(status_code=500, detail=f'Reward {reward_id} was not deleted')
raise HTTPException(
status_code=500, detail=f"Reward {reward_id} was not deleted"
)


@@ -4,7 +4,7 @@ from typing import Optional
import logging
import pydantic
from ..db_engine import ScoutClaim, ScoutOpportunity, model_to_dict
from ..db_engine import ScoutClaim, model_to_dict
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(prefix="/api/v2/scout_claims", tags=["scout_claims"])
@@ -18,7 +18,9 @@ class ScoutClaimModel(pydantic.BaseModel):
@router.get("")
async def get_scout_claims(
scout_opportunity_id: Optional[int] = None, claimed_by_team_id: Optional[int] = None
scout_opportunity_id: Optional[int] = None,
claimed_by_team_id: Optional[int] = None,
limit: Optional[int] = 100,
):
query = ScoutClaim.select().order_by(ScoutClaim.id)
@@ -28,8 +30,14 @@ async def get_scout_claims(
if claimed_by_team_id is not None:
query = query.where(ScoutClaim.claimed_by_team_id == claimed_by_team_id)
total_count = query.count()
if limit is not None:
limit = max(0, min(limit, 500))
query = query.limit(limit)
results = [model_to_dict(x, recurse=False) for x in query]
return {"count": len(results), "results": results}
return {"count": total_count, "results": results}
@router.get("/{claim_id}")

View File

@ -5,7 +5,7 @@ from typing import Optional, List
import logging
import pydantic
from ..db_engine import ScoutOpportunity, ScoutClaim, model_to_dict, fn
from ..db_engine import ScoutOpportunity, ScoutClaim, model_to_dict
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(prefix="/api/v2/scout_opportunities", tags=["scout_opportunities"])
@ -32,8 +32,10 @@ async def get_scout_opportunities(
claimed: Optional[bool] = None,
expired_before: Optional[int] = None,
opener_team_id: Optional[int] = None,
limit: Optional[int] = 100,
):
limit = max(0, min(limit, 500))
query = ScoutOpportunity.select().order_by(ScoutOpportunity.id)
if opener_team_id is not None:
@ -50,8 +52,10 @@ async def get_scout_opportunities(
else:
query = query.where(ScoutOpportunity.id.not_in(claim_subquery))
total_count = query.count()
query = query.limit(limit)
results = [opportunity_to_dict(x, recurse=False) for x in query]
return {"count": len(results), "results": results}
return {"count": total_count, "results": results}
@router.get("/{opportunity_id}")

View File

@ -8,10 +8,7 @@ from ..db_engine import StratGame, model_to_dict, fn
from ..dependencies import oauth2_scheme, valid_token
router = APIRouter(
prefix='/api/v2/games',
tags=['games']
)
router = APIRouter(prefix="/api/v2/games", tags=["games"])
class GameModel(pydantic.BaseModel):
@ -35,13 +32,22 @@ class GameList(pydantic.BaseModel):
games: List[GameModel]
@router.get('')
@router.get("")
async def get_games(
season: list = Query(default=None), forfeit: Optional[bool] = None, away_team_id: list = Query(default=None),
home_team_id: list = Query(default=None), team1_id: list = Query(default=None),
team2_id: list = Query(default=None), game_type: list = Query(default=None), ranked: Optional[bool] = None,
short_game: Optional[bool] = None, csv: Optional[bool] = False, short_output: bool = False,
gauntlet_id: Optional[int] = None):
season: list = Query(default=None),
forfeit: Optional[bool] = None,
away_team_id: list = Query(default=None),
home_team_id: list = Query(default=None),
team1_id: list = Query(default=None),
team2_id: list = Query(default=None),
game_type: list = Query(default=None),
ranked: Optional[bool] = None,
short_game: Optional[bool] = None,
csv: Optional[bool] = False,
short_output: bool = False,
gauntlet_id: Optional[int] = None,
limit: int = 100,
):
all_games = StratGame.select().order_by(StratGame.id)
if season is not None:
@ -68,49 +74,71 @@ async def get_games(
if short_game is not None:
all_games = all_games.where(StratGame.short_game == short_game)
if gauntlet_id is not None:
all_games = all_games.where(StratGame.game_type.contains(f'gauntlet-{gauntlet_id}'))
all_games = all_games.where(
StratGame.game_type.contains(f"gauntlet-{gauntlet_id}")
)
total_count = all_games.count() if not csv else 0
all_games = all_games.limit(max(0, min(limit, 500)))
if csv:
return_vals = [model_to_dict(x) for x in all_games]
for x in return_vals:
x['away_abbrev'] = x['away_team']['abbrev']
x['home_abbrev'] = x['home_team']['abbrev']
del x['away_team'], x['home_team']
x["away_abbrev"] = x["away_team"]["abbrev"]
x["home_abbrev"] = x["home_team"]["abbrev"]
del x["away_team"], x["home_team"]
output = pd.DataFrame(return_vals)[[
'id', 'away_abbrev', 'home_abbrev', 'away_score', 'home_score', 'away_team_value', 'home_team_value',
'game_type', 'season', 'week', 'short_game', 'ranked'
]]
output = pd.DataFrame(return_vals)[
[
"id",
"away_abbrev",
"home_abbrev",
"away_score",
"home_score",
"away_team_value",
"home_team_value",
"game_type",
"season",
"week",
"short_game",
"ranked",
]
]
return Response(content=output.to_csv(index=False), media_type='text/csv')
return Response(content=output.to_csv(index=False), media_type="text/csv")
return_val = {'count': all_games.count(), 'games': [
model_to_dict(x, recurse=not short_output) for x in all_games
]}
return_val = {
"count": total_count,
"games": [model_to_dict(x, recurse=not short_output) for x in all_games],
}
return return_val
@router.get('/{game_id}')
@router.get("/{game_id}")
async def get_one_game(game_id: int):
this_game = StratGame.get_or_none(StratGame.id == game_id)
if not this_game:
raise HTTPException(status_code=404, detail=f'StratGame ID {game_id} not found')
raise HTTPException(status_code=404, detail=f"StratGame ID {game_id} not found")
g_result = model_to_dict(this_game)
return g_result
@router.patch('/{game_id}')
@router.patch("/{game_id}")
async def patch_game(
game_id: int, game_type: Optional[str] = None, away_score: Optional[int] = None,
home_score: Optional[int] = None, token: str = Depends(oauth2_scheme)):
game_id: int,
game_type: Optional[str] = None,
away_score: Optional[int] = None,
home_score: Optional[int] = None,
token: str = Depends(oauth2_scheme),
):
if not valid_token(token):
logging.warning('patch_game - Bad Token: [REDACTED]')
raise HTTPException(status_code=401, detail='Unauthorized')
logging.warning("patch_game - Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
this_game = StratGame.get_or_none(StratGame.id == game_id)
if not this_game:
raise HTTPException(status_code=404, detail=f'StratGame ID {game_id} not found')
raise HTTPException(status_code=404, detail=f"StratGame ID {game_id} not found")
if away_score is not None:
this_game.away_score = away_score
@ -123,14 +151,14 @@ async def patch_game(
g_result = model_to_dict(this_game)
return g_result
else:
raise HTTPException(status_code=500, detail=f'Unable to patch game {game_id}')
raise HTTPException(status_code=500, detail=f"Unable to patch game {game_id}")
@router.post('')
@router.post("")
async def post_game(this_game: GameModel, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('post_games - Bad Token: [REDACTED]')
raise HTTPException(status_code=401, detail='Unauthorized')
logging.warning("post_games - Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
this_game = StratGame(**this_game.dict())
@ -141,25 +169,25 @@ async def post_game(this_game: GameModel, token: str = Depends(oauth2_scheme)):
else:
raise HTTPException(
status_code=418,
detail='Well slap my ass and call me a teapot; I could not save that game'
detail="Well slap my ass and call me a teapot; I could not save that game",
)
@router.delete('/{game_id}')
@router.delete("/{game_id}")
async def delete_game(game_id: int, token: str = Depends(oauth2_scheme)):
if not valid_token(token):
logging.warning('delete_game - Bad Token: [REDACTED]')
raise HTTPException(status_code=401, detail='Unauthorized')
logging.warning("delete_game - Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
this_game = StratGame.get_or_none(StratGame.id == game_id)
if not this_game:
raise HTTPException(status_code=404, detail=f'StratGame ID {game_id} not found')
raise HTTPException(status_code=404, detail=f"StratGame ID {game_id} not found")
count = this_game.delete_instance()
if count == 1:
return f'StratGame {game_id} has been deleted'
return f"StratGame {game_id} has been deleted"
else:
raise HTTPException(status_code=500, detail=f'StratGame {game_id} could not be deleted')
raise HTTPException(
status_code=500, detail=f"StratGame {game_id} could not be deleted"
)

View File

@ -1049,7 +1049,6 @@ async def team_buy_players(team_id: int, ids: str, ts: str):
detail=f"You are not authorized to buy {this_team.abbrev} cards. This event has been logged.",
)
all_ids = ids.split(",")
conf_message = ""
total_cost = 0
@ -1531,8 +1530,8 @@ async def delete_team(team_id, token: str = Depends(oauth2_scheme)):
raise HTTPException(status_code=500, detail=f"Team {team_id} was not deleted")
@router.get("/{team_id}/evolutions")
async def list_team_evolutions(
@router.get("/{team_id}/refractors")
async def list_team_refractors(
team_id: int,
card_type: Optional[str] = Query(default=None),
tier: Optional[int] = Query(default=None),
@ -1540,9 +1539,9 @@ async def list_team_evolutions(
per_page: int = Query(default=10, ge=1, le=100),
token: str = Depends(oauth2_scheme),
):
"""List all EvolutionCardState rows for a team, with optional filters.
"""List all RefractorCardState rows for a team, with optional filters.
Joins EvolutionCardState to EvolutionTrack so that card_type filtering
Joins RefractorCardState to RefractorTrack so that card_type filtering
works without a second query. Results are paginated via page/per_page
(1-indexed pages); items are ordered by player_id for stable ordering.
@ -1555,27 +1554,27 @@ async def list_team_evolutions(
Response shape:
{"count": N, "items": [card_state_with_threshold_context, ...]}
Each item in 'items' has the same shape as GET /evolution/cards/{card_id}.
Each item in 'items' has the same shape as GET /refractor/cards/{card_id}.
"""
if not valid_token(token):
logging.warning("Bad Token: [REDACTED]")
raise HTTPException(status_code=401, detail="Unauthorized")
from ..db_engine import EvolutionCardState, EvolutionTrack
from ..routers_v2.evolution import _build_card_state_response
from ..db_engine import RefractorCardState, RefractorTrack
from ..routers_v2.refractor import _build_card_state_response
query = (
EvolutionCardState.select(EvolutionCardState, EvolutionTrack)
.join(EvolutionTrack)
.where(EvolutionCardState.team == team_id)
.order_by(EvolutionCardState.player_id)
RefractorCardState.select(RefractorCardState, RefractorTrack)
.join(RefractorTrack)
.where(RefractorCardState.team == team_id)
.order_by(RefractorCardState.player_id)
)
if card_type is not None:
query = query.where(EvolutionTrack.card_type == card_type)
query = query.where(RefractorTrack.card_type == card_type)
if tier is not None:
query = query.where(EvolutionCardState.current_tier == tier)
query = query.where(RefractorCardState.current_tier == tier)
total = query.count()
offset = (page - 1) * per_page

View File

@ -1,36 +1,36 @@
"""Seed script for EvolutionTrack records.
"""Seed script for RefractorTrack records.
Loads track definitions from evolution_tracks.json and upserts them into the
Loads track definitions from refractor_tracks.json and upserts them into the
database using get_or_create keyed on name. Existing tracks have their
thresholds and formula updated to match the JSON in case values have changed.
Can be run standalone:
python -m app.seed.evolution_tracks
python -m app.seed.refractor_tracks
"""
import json
import logging
from pathlib import Path
from app.db_engine import EvolutionTrack
from app.db_engine import RefractorTrack
logger = logging.getLogger(__name__)
_JSON_PATH = Path(__file__).parent / "evolution_tracks.json"
_JSON_PATH = Path(__file__).parent / "refractor_tracks.json"
def seed_evolution_tracks() -> list[EvolutionTrack]:
"""Upsert evolution tracks from JSON seed data.
def seed_refractor_tracks() -> list[RefractorTrack]:
"""Upsert refractor tracks from JSON seed data.
Returns a list of EvolutionTrack instances that were created or updated.
Returns a list of RefractorTrack instances that were created or updated.
"""
raw = _JSON_PATH.read_text(encoding="utf-8")
track_defs = json.loads(raw)
results: list[EvolutionTrack] = []
results: list[RefractorTrack] = []
for defn in track_defs:
track, created = EvolutionTrack.get_or_create(
track, created = RefractorTrack.get_or_create(
name=defn["name"],
defaults={
"card_type": defn["card_type"],
@ -61,6 +61,6 @@ def seed_evolution_tracks() -> list[EvolutionTrack]:
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
logger.info("Seeding evolution tracks...")
tracks = seed_evolution_tracks()
logger.info("Seeding refractor tracks...")
tracks = seed_refractor_tracks()
logger.info("Done. %d track(s) processed.", len(tracks))

View File

@ -1,6 +1,6 @@
"""Formula engine for evolution value computation (WP-09).
"""Formula engine for refractor value computation (WP-09).
Three pure functions that compute a numeric evolution value from career stats,
Three pure functions that compute a numeric refractor value from career stats,
plus helpers for formula dispatch and tier classification.
Stats attributes expected by each formula:
@ -79,7 +79,7 @@ def compute_value_for_track(card_type: str, stats) -> float:
def tier_from_value(value: float, track) -> int:
"""Return the evolution tier (0-4) for a computed value against a track.
"""Return the refractor tier (0-4) for a computed value against a track.
Tier boundaries are inclusive on the lower end:
T0: value < t1

View File

@ -0,0 +1,698 @@
"""Refractor rating boost service (Phase 2).
Pure functions for computing boosted card ratings when a player
reaches a new Refractor tier. The module-level 'db' variable is used by
apply_tier_boost() for atomic writes; tests patch this reference to redirect
writes to a shared-memory SQLite database.
Batter boost: fixed +0.5 to four offensive columns per tier.
Pitcher boost: 1.5 TB-budget priority algorithm per tier.
"""
from decimal import Decimal, ROUND_HALF_UP
import hashlib
import json
import logging
# Module-level db reference imported lazily so that this module can be
# imported before app.db_engine is fully initialised (e.g. in tests that
# patch DATABASE_TYPE before importing db_engine).
# Tests that need to redirect DB writes should patch this attribute at module
# level: `import app.services.refractor_boost as m; m.db = test_db`.
db = None
def _get_db():
"""Return the module-level db, importing lazily on first use."""
global db
if db is None:
from app.db_engine import db as _db # noqa: PLC0415
db = _db
return db
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Batter constants
# ---------------------------------------------------------------------------
BATTER_POSITIVE_DELTAS: dict[str, Decimal] = {
"homerun": Decimal("0.50"),
"double_pull": Decimal("0.50"),
"single_one": Decimal("0.50"),
"walk": Decimal("0.50"),
}
BATTER_NEGATIVE_DELTAS: dict[str, Decimal] = {
"strikeout": Decimal("-1.50"),
"groundout_a": Decimal("-0.50"),
}
# All 22 outcome columns that must sum to 108.
BATTER_OUTCOME_COLUMNS: list[str] = [
"homerun",
"bp_homerun",
"triple",
"double_three",
"double_two",
"double_pull",
"single_two",
"single_one",
"single_center",
"bp_single",
"hbp",
"walk",
"strikeout",
"lineout",
"popout",
"flyout_a",
"flyout_bq",
"flyout_lf_b",
"flyout_rf_b",
"groundout_a",
"groundout_b",
"groundout_c",
]
# ---------------------------------------------------------------------------
# Pitcher constants
# ---------------------------------------------------------------------------
# (column, tb_cost) pairs in priority order.
PITCHER_PRIORITY: list[tuple[str, int]] = [
("double_cf", 2),
("double_three", 2),
("double_two", 2),
("single_center", 1),
("single_two", 1),
("single_one", 1),
("bp_single", 1),
("walk", 1),
("homerun", 4),
("bp_homerun", 4),
("triple", 3),
("hbp", 1),
]
# All 18 variable outcome columns (sum to 79; x-checks add 29 for 108 total).
PITCHER_OUTCOME_COLUMNS: list[str] = [
"homerun",
"bp_homerun",
"triple",
"double_three",
"double_two",
"double_cf",
"single_two",
"single_one",
"single_center",
"bp_single",
"hbp",
"walk",
"strikeout",
"flyout_lf_b",
"flyout_cf_b",
"flyout_rf_b",
"groundout_a",
"groundout_b",
]
# Cross-check columns that are NEVER modified by the boost algorithm.
PITCHER_XCHECK_COLUMNS: list[str] = [
"xcheck_p",
"xcheck_c",
"xcheck_1b",
"xcheck_2b",
"xcheck_3b",
"xcheck_ss",
"xcheck_lf",
"xcheck_cf",
"xcheck_rf",
]
PITCHER_TB_BUDGET = Decimal("1.5")
# ---------------------------------------------------------------------------
# Batter boost
# ---------------------------------------------------------------------------
def apply_batter_boost(ratings_dict: dict) -> dict:
"""Apply one Refractor tier boost to a batter's outcome ratings.
Adds fixed positive deltas to four offensive columns (homerun, double_pull,
single_one, walk) while funding that increase by reducing strikeout and
groundout_a. A 0-floor is enforced on negative columns: if the full
reduction cannot be taken, positive deltas are scaled proportionally so that
the invariant (22 columns sum to 108.0) is always preserved.
Args:
ratings_dict: Dict containing at minimum all 22 BATTER_OUTCOME_COLUMNS
as numeric (int or float) values.
Returns:
New dict with the same keys as ratings_dict, with boosted outcome column
values as floats. All other keys are passed through unchanged.
Raises:
KeyError: If any BATTER_OUTCOME_COLUMNS key is missing from ratings_dict.
"""
result = dict(ratings_dict)
# Step 1 — convert the 22 outcome columns to Decimal for precise arithmetic.
ratings: dict[str, Decimal] = {
col: Decimal(str(result[col])) for col in BATTER_OUTCOME_COLUMNS
}
# Step 2 — apply negative deltas with 0-floor, tracking how much was
# actually removed versus how much was requested.
total_requested_reduction = Decimal("0")
total_actually_reduced = Decimal("0")
for col, delta in BATTER_NEGATIVE_DELTAS.items():
requested = abs(delta)
total_requested_reduction += requested
actual = min(requested, ratings[col])
ratings[col] -= actual
total_actually_reduced += actual
# Step 3 — check whether any truncation occurred.
total_truncated = total_requested_reduction - total_actually_reduced
# Step 4 — scale positive deltas if we couldn't take the full reduction.
if total_truncated > Decimal("0"):
# Positive additions must equal what was actually reduced so the
# 108-sum is preserved.
total_requested_addition = sum(BATTER_POSITIVE_DELTAS.values())
if total_requested_addition > Decimal("0"):
scale = total_actually_reduced / total_requested_addition
else:
scale = Decimal("0")
logger.warning(
"refractor_boost: batter truncation occurred — "
"requested_reduction=%.4f actually_reduced=%.4f scale=%.6f",
float(total_requested_reduction),
float(total_actually_reduced),
float(scale),
)
# Quantize the first N-1 deltas independently, then assign the last
# delta as the remainder so the total addition equals
# total_actually_reduced exactly (no quantize drift across 4 ops).
pos_cols = list(BATTER_POSITIVE_DELTAS.keys())
positive_deltas = {}
running_sum = Decimal("0")
for col in pos_cols[:-1]:
scaled = (BATTER_POSITIVE_DELTAS[col] * scale).quantize(
Decimal("0.000001"), rounding=ROUND_HALF_UP
)
positive_deltas[col] = scaled
running_sum += scaled
last_delta = total_actually_reduced - running_sum
positive_deltas[pos_cols[-1]] = max(last_delta, Decimal("0"))
else:
positive_deltas = BATTER_POSITIVE_DELTAS
# Step 5 — apply (possibly scaled) positive deltas.
for col, delta in positive_deltas.items():
ratings[col] += delta
# Write boosted values back as floats.
for col in BATTER_OUTCOME_COLUMNS:
result[col] = float(ratings[col])
return result
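The funding arithmetic above can be sketched in miniature. This is an illustrative toy, not the real 22-column card data: it only shows that the +0.5 additions are paid for by the strikeout/groundout_a reductions, so the column total is unchanged.

```python
from decimal import Decimal

# Toy sketch of the batter boost funding (hypothetical values, only 6 of the
# 22 columns). Positive deltas (+0.5 x 4 = +2.0) are exactly funded by the
# negative deltas (-1.5 strikeout, -0.5 groundout_a = -2.0).
ratings = {"homerun": 4.0, "double_pull": 3.0, "single_one": 10.0,
           "walk": 8.0, "strikeout": 20.0, "groundout_a": 12.0}
before = sum(ratings.values())

positive = {"homerun": Decimal("0.50"), "double_pull": Decimal("0.50"),
            "single_one": Decimal("0.50"), "walk": Decimal("0.50")}
negative = {"strikeout": Decimal("-1.50"), "groundout_a": Decimal("-0.50")}

vals = {k: Decimal(str(v)) for k, v in ratings.items()}
for col, delta in negative.items():
    vals[col] -= min(abs(delta), vals[col])  # 0-floor on reductions
for col, delta in positive.items():
    vals[col] += delta

after = sum(float(v) for v in vals.values())
assert after == before  # net delta is zero, so the sum invariant holds
```

When a negative column cannot absorb its full reduction, the real function scales the positive deltas down proportionally; this sketch skips that branch because no column hits the floor.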
# ---------------------------------------------------------------------------
# Pitcher boost
# ---------------------------------------------------------------------------
def apply_pitcher_boost(ratings_dict: dict, tb_budget: float = 1.5) -> dict:
"""Apply one Refractor tier boost to a pitcher's outcome ratings.
Iterates through PITCHER_PRIORITY in order, converting as many outcome
chances as the TB budget allows into strikeouts. The TB cost per chance
varies by outcome type (e.g. a double costs 2 TB budget units, a single
costs 1). The strikeout column absorbs all converted chances.
X-check columns (xcheck_p through xcheck_rf) are never touched.
Args:
ratings_dict: Dict containing at minimum all 18 PITCHER_OUTCOME_COLUMNS
as numeric (int or float) values.
tb_budget: Total base budget available for this boost tier. Defaults
to 1.5 (PITCHER_TB_BUDGET).
Returns:
New dict with the same keys as ratings_dict, with boosted outcome column
values as floats. All other keys are passed through unchanged.
Raises:
KeyError: If any PITCHER_OUTCOME_COLUMNS key is missing from ratings_dict.
"""
result = dict(ratings_dict)
# Step 1 — convert outcome columns to Decimal, set remaining budget.
ratings: dict[str, Decimal] = {
col: Decimal(str(result[col])) for col in PITCHER_OUTCOME_COLUMNS
}
remaining = Decimal(str(tb_budget))
# Step 2 — iterate priority list, draining budget.
for col, tb_cost in PITCHER_PRIORITY:
if ratings[col] <= Decimal("0"):
continue
tb_cost_d = Decimal(str(tb_cost))
max_chances = remaining / tb_cost_d
chances_to_take = min(ratings[col], max_chances)
ratings[col] -= chances_to_take
ratings["strikeout"] += chances_to_take
remaining -= chances_to_take * tb_cost_d
if remaining <= Decimal("0"):
break
# Step 3 — warn if budget was not fully spent (rare, indicates all priority
# columns were already at zero).
if remaining > Decimal("0"):
logger.warning(
"refractor_boost: pitcher TB budget not fully spent — "
"remaining=%.4f of tb_budget=%.4f",
float(remaining),
tb_budget,
)
# Write boosted values back as floats.
for col in PITCHER_OUTCOME_COLUMNS:
result[col] = float(ratings[col])
return result
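The budget drain can likewise be shown with toy numbers. Hypothetical values, not real card data: with a 1.5 TB budget and a first-priority column costing 2 TB per chance, only 0.75 chances convert before the budget is exhausted, and every converted chance lands in strikeout.

```python
from decimal import Decimal

# Toy sketch of the TB-budget priority drain (2 of the 12 priority columns).
priority = [("double_cf", 2), ("single_center", 1)]
ratings = {"double_cf": Decimal("3.0"), "single_center": Decimal("5.0"),
           "strikeout": Decimal("15.0")}
total_before = sum(ratings.values())

remaining = Decimal("1.5")
for col, cost in priority:
    if ratings[col] <= 0:
        continue
    take = min(ratings[col], remaining / Decimal(cost))  # budget-capped chances
    ratings[col] -= take
    ratings["strikeout"] += take
    remaining -= take * Decimal(cost)
    if remaining <= 0:
        break

assert sum(ratings.values()) == total_before  # conversions are zero-sum
```

Because `double_cf` costs 2 TB per chance, the 1.5 budget buys 0.75 chances there and stops; `single_center` is never reached.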
# ---------------------------------------------------------------------------
# Variant hash
# ---------------------------------------------------------------------------
def compute_variant_hash(
player_id: int,
refractor_tier: int,
cosmetics: list[str] | None = None,
) -> int:
"""Compute a stable, deterministic variant identifier for a boosted card.
Hashes the combination of player_id, refractor_tier, and an optional sorted
list of cosmetic identifiers to produce a compact integer suitable for use
as a database variant key. The result is derived from the first 8 hex
characters of a SHA-256 digest, so collisions are extremely unlikely in
practice.
variant=0 is reserved and will never be returned; any hash that resolves to
0 is remapped to 1.
Args:
player_id: Player primary key.
refractor_tier: Refractor tier (0-4) the card has reached.
cosmetics: Optional list of cosmetic tag strings (e.g. special art
identifiers). Order is normalised; callers need not sort.
Returns:
A positive integer in the range [1, 2^32 - 1].
"""
inputs = {
"player_id": player_id,
"refractor_tier": refractor_tier,
"cosmetics": sorted(cosmetics or []),
}
raw = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
result = int(raw[:8], 16)
return result if result != 0 else 1 # variant=0 is reserved
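The determinism properties the docstring claims can be checked directly. This standalone sketch repeats the same recipe (sorted cosmetics, sorted JSON keys, first 8 hex chars of SHA-256, 0 remapped to 1) so it can run without the app package:

```python
import hashlib
import json

def variant_hash(player_id: int, tier: int, cosmetics=None) -> int:
    # Same recipe as compute_variant_hash above, restated standalone.
    inputs = {"player_id": player_id, "refractor_tier": tier,
              "cosmetics": sorted(cosmetics or [])}
    raw = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    result = int(raw[:8], 16)
    return result if result != 0 else 1  # variant=0 is reserved

# Same inputs always hash to the same variant, and cosmetic order does not
# matter because the list is sorted before serialisation.
a = variant_hash(42, 2, ["foil", "alt-art"])
b = variant_hash(42, 2, ["alt-art", "foil"])
assert a == b
assert 1 <= a <= 2**32 - 1
```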
# ---------------------------------------------------------------------------
# Display stat helpers
# ---------------------------------------------------------------------------
def compute_batter_display_stats(ratings: dict) -> dict:
"""Compute avg/obp/slg from batter outcome columns.
Uses the same formulas as the BattingCardRatingsModel Pydantic validator
so that variant card display stats are always consistent with the boosted
chance values. All denominators are 108 (the full card chance total).
Args:
ratings: Dict containing at minimum all BATTER_OUTCOME_COLUMNS as
numeric (int or float) values.
Returns:
Dict with keys 'avg', 'obp', 'slg' as floats.
"""
avg = (
ratings["homerun"]
+ ratings["bp_homerun"] / 2
+ ratings["triple"]
+ ratings["double_three"]
+ ratings["double_two"]
+ ratings["double_pull"]
+ ratings["single_two"]
+ ratings["single_one"]
+ ratings["single_center"]
+ ratings["bp_single"] / 2
) / 108
obp = (ratings["hbp"] + ratings["walk"]) / 108 + avg
slg = (
ratings["homerun"] * 4
+ ratings["bp_homerun"] * 2
+ ratings["triple"] * 3
+ ratings["double_three"] * 2
+ ratings["double_two"] * 2
+ ratings["double_pull"] * 2
+ ratings["single_two"]
+ ratings["single_one"]
+ ratings["single_center"]
+ ratings["bp_single"] / 2
) / 108
return {"avg": avg, "obp": obp, "slg": slg}
def compute_pitcher_display_stats(ratings: dict) -> dict:
"""Compute avg/obp/slg from pitcher outcome columns.
Uses the same formulas as the PitchingCardRatingsModel Pydantic validator
so that variant card display stats are always consistent with the boosted
chance values. All denominators are 108 (the full card chance total).
Args:
ratings: Dict containing at minimum all PITCHER_OUTCOME_COLUMNS as
numeric (int or float) values.
Returns:
Dict with keys 'avg', 'obp', 'slg' as floats.
"""
avg = (
ratings["homerun"]
+ ratings["bp_homerun"] / 2
+ ratings["triple"]
+ ratings["double_three"]
+ ratings["double_two"]
+ ratings["double_cf"]
+ ratings["single_two"]
+ ratings["single_one"]
+ ratings["single_center"]
+ ratings["bp_single"] / 2
) / 108
obp = (ratings["hbp"] + ratings["walk"]) / 108 + avg
slg = (
ratings["homerun"] * 4
+ ratings["bp_homerun"] * 2
+ ratings["triple"] * 3
+ ratings["double_three"] * 2
+ ratings["double_two"] * 2
+ ratings["double_cf"] * 2
+ ratings["single_two"]
+ ratings["single_one"]
+ ratings["single_center"]
+ ratings["bp_single"] / 2
) / 108
return {"avg": avg, "obp": obp, "slg": slg}
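The chance-to-stat arithmetic both display helpers share can be illustrated with toy values (hypothetical chances, not a real card): each hit chance out of 108 contributes 1/108 to avg, ballpark columns count half, and slg weights each hit by total bases.

```python
# Toy card: 3 HR, 1 triple, 4 doubles, 12 singles, 2 bp-singles,
# 8 walks, 1 HBP (all other chances are outs).
hits = {"homerun": 3.0, "triple": 1.0, "double_two": 4.0, "single_one": 12.0}
bp_single = 2.0
walk, hbp = 8.0, 1.0

avg = (sum(hits.values()) + bp_single / 2) / 108          # 21 / 108
obp = (walk + hbp) / 108 + avg                            # 30 / 108
slg = (hits["homerun"] * 4 + hits["triple"] * 3
       + hits["double_two"] * 2 + hits["single_one"]
       + bp_single / 2) / 108                             # 36 / 108
```

The real helpers are identical in shape but enumerate every single/double split column for the batter or pitcher card respectively.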
# ---------------------------------------------------------------------------
# Orchestration: apply_tier_boost
# ---------------------------------------------------------------------------
def apply_tier_boost(
player_id: int,
team_id: int,
new_tier: int,
card_type: str,
_batting_card_model=None,
_batting_ratings_model=None,
_pitching_card_model=None,
_pitching_ratings_model=None,
_card_model=None,
_state_model=None,
_audit_model=None,
) -> dict:
"""Create a boosted variant card for a tier-up.
IMPORTANT: This function is the SOLE writer of current_tier on
RefractorCardState when a tier-up occurs. The evaluator computes
the new tier but does NOT write it; this function writes tier +
variant + audit atomically inside a single db.atomic() block.
If this function fails, the tier stays at its old value and will
be retried on the next game evaluation.
Orchestrates the full flow (card creation outside atomic; state
mutations inside db.atomic()):
1. Determine source variant (variant=0 for T1, previous tier's hash for T2+)
2. Fetch source card and ratings rows
3. Apply boost formula (batter or pitcher) per vs_hand split
4. Assert 108-sum after boost for both batters and pitchers
5. Compute new variant hash
6. Create new card row with new variant (idempotency: skip if exists)
7. Create new ratings rows for both vs_hand splits (idempotency: skip if exists)
8. Inside db.atomic():
a. Write RefractorBoostAudit record
b. Update RefractorCardState: current_tier, variant, fully_evolved
c. Propagate variant to all Card rows for (player_id, team_id)
Args:
player_id: Player primary key.
team_id: Team primary key.
new_tier: The tier being reached (1-4).
card_type: One of 'batter', 'sp', 'rp'.
_batting_card_model: Injectable stub for BattingCard (used in tests).
_batting_ratings_model: Injectable stub for BattingCardRatings.
_pitching_card_model: Injectable stub for PitchingCard.
_pitching_ratings_model: Injectable stub for PitchingCardRatings.
_card_model: Injectable stub for Card.
_state_model: Injectable stub for RefractorCardState.
_audit_model: Injectable stub for RefractorBoostAudit.
Returns:
Dict with 'variant_created' (int) and 'boost_deltas' (per-split dict).
Raises:
ValueError: If the source card or ratings are missing, or if
RefractorCardState is not found for (player_id, team_id).
"""
# Lazy model imports — same pattern as refractor_evaluator.py.
if _batting_card_model is None:
from app.db_engine import BattingCard as _batting_card_model # noqa: PLC0415
if _batting_ratings_model is None:
from app.db_engine import BattingCardRatings as _batting_ratings_model # noqa: PLC0415
if _pitching_card_model is None:
from app.db_engine import PitchingCard as _pitching_card_model # noqa: PLC0415
if _pitching_ratings_model is None:
from app.db_engine import PitchingCardRatings as _pitching_ratings_model # noqa: PLC0415
if _card_model is None:
from app.db_engine import Card as _card_model # noqa: PLC0415
if _state_model is None:
from app.db_engine import RefractorCardState as _state_model # noqa: PLC0415
if _audit_model is None:
from app.db_engine import RefractorBoostAudit as _audit_model # noqa: PLC0415
_db = _get_db()
if card_type not in ("batter", "sp", "rp"):
raise ValueError(
f"Invalid card_type={card_type!r}; expected one of 'batter', 'sp', 'rp'"
)
is_batter = card_type == "batter"
CardModel = _batting_card_model if is_batter else _pitching_card_model
RatingsModel = _batting_ratings_model if is_batter else _pitching_ratings_model
fk_field = "battingcard" if is_batter else "pitchingcard"
# 1. Determine source variant.
if new_tier == 1:
source_variant = 0
else:
source_variant = compute_variant_hash(player_id, new_tier - 1)
# 2. Fetch source card and ratings rows.
source_card = CardModel.get_or_none(
(CardModel.player == player_id) & (CardModel.variant == source_variant)
)
if source_card is None:
raise ValueError(
f"No {'batting' if is_batter else 'pitching'}card for "
f"player={player_id} variant={source_variant}"
)
ratings_rows = list(
RatingsModel.select().where(getattr(RatingsModel, fk_field) == source_card.id)
)
if not ratings_rows:
raise ValueError(f"No ratings rows for card_id={source_card.id}")
# 3. Apply boost to each vs_hand split.
boost_fn = apply_batter_boost if is_batter else apply_pitcher_boost
outcome_cols = BATTER_OUTCOME_COLUMNS if is_batter else PITCHER_OUTCOME_COLUMNS
boosted_splits: dict[str, dict] = {}
for row in ratings_rows:
# Build the ratings dict: outcome columns + (pitcher) x-check columns.
ratings_dict: dict = {col: getattr(row, col) for col in outcome_cols}
if not is_batter:
for col in PITCHER_XCHECK_COLUMNS:
ratings_dict[col] = getattr(row, col)
boosted = boost_fn(ratings_dict)
# 4. Assert 108-sum invariant after boost (Peewee bypasses Pydantic validators).
if is_batter:
boosted_sum = sum(boosted[col] for col in BATTER_OUTCOME_COLUMNS)
else:
boosted_sum = sum(boosted[col] for col in PITCHER_OUTCOME_COLUMNS) + sum(
boosted[col] for col in PITCHER_XCHECK_COLUMNS
)
if abs(boosted_sum - 108.0) >= 0.01:
raise ValueError(
f"108-sum invariant violated after boost for player={player_id} "
f"vs_hand={row.vs_hand}: sum={boosted_sum:.6f}"
)
boosted_splits[row.vs_hand] = boosted
# 5. Compute new variant hash.
new_variant = compute_variant_hash(player_id, new_tier)
# 6. Create new card row (idempotency: skip if exists).
existing_card = CardModel.get_or_none(
(CardModel.player == player_id) & (CardModel.variant == new_variant)
)
if existing_card is not None:
new_card = existing_card
else:
if is_batter:
clone_fields = [
"steal_low",
"steal_high",
"steal_auto",
"steal_jump",
"bunting",
"hit_and_run",
"running",
"offense_col",
"hand",
]
else:
clone_fields = [
"balk",
"wild_pitch",
"hold",
"starter_rating",
"relief_rating",
"closer_rating",
"batting",
"offense_col",
"hand",
]
card_data: dict = {
"player": player_id,
"variant": new_variant,
"image_url": None, # No rendered image for variant cards yet.
}
for fname in clone_fields:
card_data[fname] = getattr(source_card, fname)
new_card = CardModel.create(**card_data)
# 7. Create new ratings rows for each split (idempotency: skip if exists).
display_stats_fn = (
compute_batter_display_stats if is_batter else compute_pitcher_display_stats
)
for vs_hand, boosted_ratings in boosted_splits.items():
existing_ratings = RatingsModel.get_or_none(
(getattr(RatingsModel, fk_field) == new_card.id)
& (RatingsModel.vs_hand == vs_hand)
)
if existing_ratings is not None:
continue # Idempotency: already written.
ratings_data: dict = {
fk_field: new_card.id,
"vs_hand": vs_hand,
}
# Outcome columns (boosted values).
ratings_data.update({col: boosted_ratings[col] for col in outcome_cols})
# X-check columns for pitchers (unchanged by boost, copy from boosted dict).
if not is_batter:
for col in PITCHER_XCHECK_COLUMNS:
ratings_data[col] = boosted_ratings[col]
# Direction rates for batters: copy from source row.
if is_batter:
source_row = next(r for r in ratings_rows if r.vs_hand == vs_hand)
for rate_col in ("pull_rate", "center_rate", "slap_rate"):
ratings_data[rate_col] = getattr(source_row, rate_col)
# Compute fresh display stats from boosted chance columns.
display_stats = display_stats_fn(boosted_ratings)
ratings_data.update(display_stats)
RatingsModel.create(**ratings_data)
# 8. Load card state — needed for atomic state mutations.
card_state = _state_model.get_or_none(
(_state_model.player == player_id) & (_state_model.team == team_id)
)
if card_state is None:
raise ValueError(
f"No refractor_card_state for player={player_id} team={team_id}"
)
# All state mutations in a single atomic block.
with _db.atomic():
# 8a. Write audit record.
# boost_delta_json stores per-split boosted values including x-check columns
# for pitchers so the full card can be reconstructed from the audit.
audit_data: dict = {
"card_state": card_state.id,
"tier": new_tier,
"variant_created": new_variant,
"boost_delta_json": json.dumps(boosted_splits, default=str),
}
if is_batter:
audit_data["battingcard"] = new_card.id
else:
audit_data["pitchingcard"] = new_card.id
existing_audit = _audit_model.get_or_none(
(_audit_model.card_state == card_state.id) & (_audit_model.tier == new_tier)
)
if existing_audit is None:
_audit_model.create(**audit_data)
# 8b. Update RefractorCardState — this is the SOLE tier write on tier-up.
card_state.current_tier = new_tier
card_state.fully_evolved = new_tier >= 4
card_state.variant = new_variant
card_state.save()
# 8c. Propagate variant to all Card rows for (player_id, team_id).
_card_model.update(variant=new_variant).where(
(_card_model.player == player_id) & (_card_model.team == team_id)
).execute()
logger.debug(
"refractor_boost: applied T%s boost for player=%s team=%s variant=%s",
new_tier,
player_id,
team_id,
new_variant,
)
return {
"variant_created": new_variant,
"boost_deltas": dict(boosted_splits),
}
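The audit comment in step 8a says the full card can be reconstructed from boost_delta_json. A minimal sketch of that round-trip; the column names ("so", "bb") are placeholders, since the real keys come from outcome_cols plus the pitcher x-check columns:

```python
import json

# Hypothetical audit payload — the real keys come from outcome_cols (plus
# PITCHER_XCHECK_COLUMNS for pitchers); "so"/"bb" are placeholder columns.
audit_row = {
    "tier": 2,
    "boost_delta_json": json.dumps({
        "L": {"so": 12, "bb": 4},
        "R": {"so": 10, "bb": 5},
    }),
}

# Reconstruct the per-split boosted ratings exactly as they were written.
boosted_splits = json.loads(audit_row["boost_delta_json"])
assert set(boosted_splits) == {"L", "R"}   # one entry per vs_hand split
assert boosted_splits["L"]["so"] == 12     # boosted values round-trip intact
```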

View File

@ -1,6 +1,6 @@
"""Evolution evaluator service (WP-08).
"""Refractor evaluator service (WP-08).
Force-recalculates a card's evolution state from career totals.
Force-recalculates a card's refractor state from career totals.
evaluate_card() is the main entry point:
1. Load career totals: SUM all BattingSeasonStats/PitchingSeasonStats rows for (player_id, team_id)
@ -9,12 +9,23 @@ evaluate_card() is the main entry point:
4. Compare value to track thresholds to determine new_tier
5. Update card_state.current_value = computed value
6. Update card_state.current_tier = max(current_tier, new_tier) — no regression
7. Update card_state.fully_evolved = (new_tier >= 4)
(SKIPPED when dry_run=True)
7. Update card_state.fully_evolved = (current_tier >= 4)
(SKIPPED when dry_run=True)
8. Update card_state.last_evaluated_at = NOW()
When dry_run=True, only steps 5 and 8 are written (current_value and
last_evaluated_at). Steps 6–7 (current_tier and fully_evolved) are intentionally
skipped so that the evaluate-game endpoint can detect a pending tier-up and
delegate the tier write to apply_tier_boost(), which writes tier + variant
atomically. The return dict always includes both "computed_tier" (what the
formula says the tier should be) and "computed_fully_evolved" (whether the
computed tier implies full evolution) so callers can make decisions without
reading the database again.
Idempotent: calling multiple times with the same data produces the same result.
Depends on WP-05 (EvolutionCardState), WP-07 (BattingSeasonStats/PitchingSeasonStats),
Depends on WP-05 (RefractorCardState), WP-07 (BattingSeasonStats/PitchingSeasonStats),
and WP-09 (formula engine). Models and formula functions are imported lazily so
this module can be imported before those PRs merge.
"""
@ -47,27 +58,39 @@ class _CareerTotals:
def evaluate_card(
player_id: int,
team_id: int,
dry_run: bool = False,
_stats_model=None,
_state_model=None,
_compute_value_fn=None,
_tier_from_value_fn=None,
) -> dict:
"""Force-recalculate a card's evolution tier from career stats.
"""Force-recalculate a card's refractor tier from career stats.
Sums all BattingSeasonStats or PitchingSeasonStats rows (based on
card_type) for (player_id, team_id) across all seasons, then delegates
formula computation and tier classification to the formula engine. The result is written back to evolution_card_state and
returned as a dict.
formula computation and tier classification to the formula engine. The
result is written back to refractor_card_state and returned as a dict.
current_tier never decreases (no regression):
card_state.current_tier = max(card_state.current_tier, new_tier)
    When dry_run=True, only current_value and last_evaluated_at are written —
current_tier and fully_evolved are NOT updated. This allows the caller
(evaluate-game endpoint) to detect a tier-up and delegate the tier write
to apply_tier_boost(), which writes tier + variant atomically. The return
dict always includes "computed_tier" (what the formula says the tier should
be) in addition to "current_tier" (what is actually stored in the DB).
Args:
player_id: Player primary key.
team_id: Team primary key.
dry_run: When True, skip writing current_tier and fully_evolved so
that apply_tier_boost() can write them atomically with variant
creation. Defaults to False (existing behaviour for the manual
/evaluate endpoint).
_stats_model: Override for BattingSeasonStats/PitchingSeasonStats
(used in tests to inject a stub model with all stat fields).
_state_model: Override for EvolutionCardState (used in tests to avoid
_state_model: Override for RefractorCardState (used in tests to avoid
importing from db_engine before WP-05 merges).
_compute_value_fn: Override for formula_engine.compute_value_for_track
(used in tests to avoid importing formula_engine before WP-09 merges).
@ -75,14 +98,16 @@ def evaluate_card(
(used in tests).
Returns:
Dict with updated current_tier, current_value, fully_evolved,
last_evaluated_at (ISO-8601 string).
Dict with current_tier, computed_tier, current_value, fully_evolved,
last_evaluated_at (ISO-8601 string). "computed_tier" reflects what
the formula computed; "current_tier" reflects what is stored in the DB
(which may differ when dry_run=True and a tier-up is pending).
Raises:
ValueError: If no evolution_card_state row exists for (player_id, team_id).
ValueError: If no refractor_card_state row exists for (player_id, team_id).
"""
if _state_model is None:
from app.db_engine import EvolutionCardState as _state_model # noqa: PLC0415
from app.db_engine import RefractorCardState as _state_model # noqa: PLC0415
if _compute_value_fn is None or _tier_from_value_fn is None:
from app.services.formula_engine import ( # noqa: PLC0415
@ -101,7 +126,7 @@ def evaluate_card(
)
if card_state is None:
raise ValueError(
f"No evolution_card_state for player_id={player_id} team_id={team_id}"
f"No refractor_card_state for player_id={player_id} team_id={team_id}"
)
# 2. Load career totals from the appropriate season stats table
@ -169,21 +194,30 @@ def evaluate_card(
value = _compute_value_fn(track.card_type, totals)
new_tier = _tier_from_value_fn(value, track)
    # 5–8. Update card state (no tier regression)
now = datetime.utcnow()
    # 5–8. Update card state.
now = datetime.now()
computed_tier = new_tier
computed_fully_evolved = computed_tier >= 4
# Always update value and timestamp; current_tier and fully_evolved are
# skipped when dry_run=True so that apply_tier_boost() can write them
# atomically with variant creation on tier-up.
card_state.current_value = value
card_state.current_tier = max(card_state.current_tier, new_tier)
card_state.fully_evolved = card_state.current_tier >= 4
card_state.last_evaluated_at = now
if not dry_run:
card_state.current_tier = max(card_state.current_tier, new_tier)
card_state.fully_evolved = card_state.current_tier >= 4
card_state.save()
logging.debug(
"evolution_eval: player=%s team=%s value=%.2f tier=%s fully_evolved=%s",
"refractor_eval: player=%s team=%s value=%.2f computed_tier=%s "
"stored_tier=%s dry_run=%s",
player_id,
team_id,
value,
computed_tier,
card_state.current_tier,
card_state.fully_evolved,
dry_run,
)
return {
@ -191,6 +225,8 @@ def evaluate_card(
"team_id": team_id,
"current_value": card_state.current_value,
"current_tier": card_state.current_tier,
"computed_tier": computed_tier,
"computed_fully_evolved": computed_fully_evolved,
"fully_evolved": card_state.fully_evolved,
"last_evaluated_at": card_state.last_evaluated_at.isoformat(),
}

View File

@ -1,10 +1,10 @@
"""
WP-10: Pack opening hook — evolution_card_state initialization.
WP-10: Pack opening hook — refractor_card_state initialization.
Public API
----------
initialize_card_evolution(player_id, team_id, card_type)
Get-or-create an EvolutionCardState for the (player_id, team_id) pair.
initialize_card_refractor(player_id, team_id, card_type)
Get-or-create a RefractorCardState for the (player_id, team_id) pair.
Returns the state instance on success, or None if initialization fails
(missing track, integrity error, etc.). Never raises.
@ -16,23 +16,23 @@ Design notes
------------
- The function is intentionally fire-and-forget from the caller's perspective.
All exceptions are caught and logged; pack opening is never blocked.
- No EvolutionProgress rows are created here. Progress accumulation is a
- No RefractorProgress rows are created here. Progress accumulation is a
separate concern handled by the stats-update pipeline (WP-07/WP-08).
- AI teams and Gauntlet teams skip Paperdex insertion (cards.py pattern);
  we do NOT replicate that exclusion here — all teams get an evolution state
  we do NOT replicate that exclusion here — all teams get a refractor state
so that future rule changes don't require back-filling.
"""
import logging
from typing import Optional
from app.db_engine import DoesNotExist, EvolutionCardState, EvolutionTrack
from app.db_engine import DoesNotExist, RefractorCardState, RefractorTrack
logger = logging.getLogger(__name__)
def _determine_card_type(player) -> str:
"""Map a player's primary position to an evolution card_type string.
"""Map a player's primary position to a refractor card_type string.
Rules (from WP-10 spec):
- pos_1 contains 'SP' -> 'sp'
@ -53,34 +53,34 @@ def _determine_card_type(player) -> str:
return "batter"
def initialize_card_evolution(
def initialize_card_refractor(
player_id: int,
team_id: int,
card_type: str,
) -> Optional[EvolutionCardState]:
"""Get-or-create an EvolutionCardState for a newly acquired card.
) -> Optional[RefractorCardState]:
"""Get-or-create a RefractorCardState for a newly acquired card.
Called by the cards POST endpoint after each card is inserted. The
function is idempotent: if a state row already exists for the
    (player_id, team_id) pair it is returned unchanged — existing
evolution progress is never reset.
refractor progress is never reset.
Args:
player_id: Primary key of the Player row (Player.player_id).
team_id: Primary key of the Team row (Team.id).
card_type: One of 'batter', 'sp', 'rp'. Determines which
EvolutionTrack is assigned to the new state.
RefractorTrack is assigned to the new state.
Returns:
The existing or newly created EvolutionCardState instance, or
The existing or newly created RefractorCardState instance, or
None if initialization could not complete (missing track seed
data, unexpected DB error, etc.).
"""
try:
track = EvolutionTrack.get(EvolutionTrack.card_type == card_type)
track = RefractorTrack.get(RefractorTrack.card_type == card_type)
except DoesNotExist:
logger.warning(
"evolution_init: no EvolutionTrack found for card_type=%r "
"refractor_init: no RefractorTrack found for card_type=%r "
"(player_id=%s, team_id=%s) — skipping state creation",
card_type,
player_id,
@ -89,7 +89,7 @@ def initialize_card_evolution(
return None
except Exception:
logger.exception(
"evolution_init: unexpected error fetching track "
"refractor_init: unexpected error fetching track "
"(card_type=%r, player_id=%s, team_id=%s)",
card_type,
player_id,
@ -98,7 +98,7 @@ def initialize_card_evolution(
return None
try:
state, created = EvolutionCardState.get_or_create(
state, created = RefractorCardState.get_or_create(
player_id=player_id,
team_id=team_id,
defaults={
@ -110,7 +110,7 @@ def initialize_card_evolution(
)
if created:
logger.debug(
"evolution_init: created EvolutionCardState id=%s "
"refractor_init: created RefractorCardState id=%s "
"(player_id=%s, team_id=%s, card_type=%r)",
state.id,
player_id,
@ -119,7 +119,7 @@ def initialize_card_evolution(
)
else:
logger.debug(
"evolution_init: state already exists id=%s "
"refractor_init: state already exists id=%s "
"(player_id=%s, team_id=%s) — no-op",
state.id,
player_id,
@ -129,7 +129,7 @@ def initialize_card_evolution(
except Exception:
logger.exception(
"evolution_init: failed to get_or_create state "
"refractor_init: failed to get_or_create state "
"(player_id=%s, team_id=%s, card_type=%r)",
player_id,
team_id,

View File

@ -76,7 +76,8 @@ def _get_player_pairs(game_id: int) -> tuple[set, set]:
for batter_id, batter_team_id, pitcher_id, pitcher_team_id in plays:
if batter_id is not None:
batting_pairs.add((batter_id, batter_team_id))
pitching_pairs.add((pitcher_id, pitcher_team_id))
if pitcher_id is not None:
pitching_pairs.add((pitcher_id, pitcher_team_id))
# Include pitchers who have a Decision but no StratPlay rows for this game
# (rare edge case, e.g. a pitcher credited with a decision without recording

View File

@ -0,0 +1,19 @@
-- Migration: Rename evolution tables to refractor tables
-- Date: 2026-03-23
--
-- Renames all four evolution system tables to the refractor naming convention.
-- This migration corresponds to the application-level rename from
-- EvolutionTrack/EvolutionCardState/EvolutionTierBoost/EvolutionCosmetic
-- to RefractorTrack/RefractorCardState/RefractorTierBoost/RefractorCosmetic.
--
-- The table renames are performed in order that respects foreign key
-- dependencies (referenced tables first, then referencing tables).
ALTER TABLE evolution_track RENAME TO refractor_track;
ALTER TABLE evolution_card_state RENAME TO refractor_card_state;
ALTER TABLE evolution_tier_boost RENAME TO refractor_tier_boost;
ALTER TABLE evolution_cosmetic RENAME TO refractor_cosmetic;
-- Rename indexes to match new table names
ALTER INDEX IF EXISTS evolution_card_state_player_team_uniq RENAME TO refractor_card_state_player_team_uniq;
ALTER INDEX IF EXISTS evolution_tier_boost_track_tier_type_target_uniq RENAME TO refractor_tier_boost_track_tier_type_target_uniq;

View File

@ -0,0 +1,19 @@
-- Migration: Add team_id index to refractor_card_state
-- Date: 2026-03-25
--
-- Adds a non-unique index on refractor_card_state.team_id to support the new
-- GET /api/v2/refractor/cards list endpoint, which filters by team as its
-- primary discriminator and is called on every /refractor status bot command.
--
-- The existing unique index is on (player_id, team_id) with player leading,
-- so team-only queries cannot use it efficiently.
BEGIN;
CREATE INDEX IF NOT EXISTS idx_refractor_card_state_team
ON refractor_card_state (team_id);
COMMIT;
-- Rollback:
-- DROP INDEX IF EXISTS idx_refractor_card_state_team;
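The rationale above (a player-leading composite index cannot serve team-only filters efficiently) can be illustrated with a toy model of B-tree ordering in Python; this is a conceptual sketch, not how PostgreSQL is actually queried:

```python
# Nine (player_id, team_id) pairs, as in refractor_card_state.
rows = [(p, t) for p in range(3) for t in range(3)]

# The unique index orders by player_id first, so team_id == 1 entries are
# scattered through it — a team-only lookup has no contiguous range to scan.
player_leading = sorted(rows)
positions = [i for i, r in enumerate(player_leading) if r[1] == 1]
assert positions == [1, 4, 7]  # non-adjacent

# A team_id-leading index keeps the same entries adjacent, so a team-only
# filter becomes a single contiguous index range scan.
team_leading = sorted(rows, key=lambda r: (r[1], r[0]))
assert team_leading[3:6] == [(0, 1), (1, 1), (2, 1)]
```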

View File

@ -0,0 +1,47 @@
-- Migration: Refractor Phase 2 — rating boost support
-- Date: 2026-03-28
-- Purpose: Extends the Refractor system to track and audit rating boosts
-- applied at each tier-up. Adds a variant column to
-- refractor_card_state (mirrors card.variant for promoted copies)
-- and creates the refractor_boost_audit table to record the
-- boost delta, source card, and variant assigned at each tier.
--
-- Tables affected:
-- refractor_card_state — new column: variant INTEGER
-- refractor_boost_audit — new table
--
-- Run on dev first, verify with:
-- SELECT column_name FROM information_schema.columns
-- WHERE table_name = 'refractor_card_state'
-- AND column_name = 'variant';
-- SELECT count(*) FROM refractor_boost_audit;
--
-- Rollback: See DROP/ALTER statements at bottom of file
BEGIN;
-- Verify card.variant column exists (should be from Phase 1 migration).
-- If not present, uncomment:
-- ALTER TABLE card ADD COLUMN IF NOT EXISTS variant INTEGER DEFAULT NULL;
-- New columns on refractor_card_state (additive, no data migration needed)
ALTER TABLE refractor_card_state ADD COLUMN IF NOT EXISTS variant INTEGER;
-- Boost audit table: records what was applied at each tier-up
CREATE TABLE IF NOT EXISTS refractor_boost_audit (
id SERIAL PRIMARY KEY,
card_state_id INTEGER NOT NULL REFERENCES refractor_card_state(id) ON DELETE CASCADE,
tier SMALLINT NOT NULL,
battingcard_id INTEGER REFERENCES battingcard(id),
pitchingcard_id INTEGER REFERENCES pitchingcard(id),
variant_created INTEGER NOT NULL,
boost_delta_json JSONB NOT NULL,
applied_at TIMESTAMP NOT NULL DEFAULT NOW(),
UNIQUE(card_state_id, tier) -- Prevent duplicate audit records on retry
);
COMMIT;
-- Rollback:
-- DROP TABLE IF EXISTS refractor_boost_audit;
-- ALTER TABLE refractor_card_state DROP COLUMN IF EXISTS variant;

run-local.sh — new executable file (132 lines)
View File

@ -0,0 +1,132 @@
#!/usr/bin/env bash
# run-local.sh — Spin up the Paper Dynasty Database API locally for testing.
#
# Connects to the dev PostgreSQL on the homelab (10.10.0.42) so you get real
# card data for rendering. Playwright Chromium must be installed locally
# (it already is on this workstation).
#
# Usage:
# ./run-local.sh # start on default port 8000
# ./run-local.sh 8001 # start on custom port
# ./run-local.sh --stop # kill a running instance
#
# Card rendering test URLs (after startup):
# HTML preview: http://localhost:8000/api/v2/players/{id}/battingcard/{date}/{variant}?html=True
# PNG render: http://localhost:8000/api/v2/players/{id}/battingcard/{date}/{variant}
# API docs: http://localhost:8000/api/docs
set -euo pipefail
cd "$(dirname "$0")"
PORT="${1:-8000}"
PIDFILE=".run-local.pid"
LOGFILE="logs/database/run-local.log"
# ── Stop mode ────────────────────────────────────────────────────────────────
if [[ "${1:-}" == "--stop" ]]; then
if [[ -f "$PIDFILE" ]]; then
pid=$(cat "$PIDFILE")
if kill -0 "$pid" 2>/dev/null; then
kill "$pid"
echo "Stopped local API (PID $pid)"
else
echo "PID $pid not running (stale pidfile)"
fi
rm -f "$PIDFILE"
else
echo "No pidfile found — nothing to stop"
fi
exit 0
fi
# ── Pre-flight checks ───────────────────────────────────────────────────────
if [[ -f "$PIDFILE" ]] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
echo "Already running (PID $(cat "$PIDFILE")). Use './run-local.sh --stop' first."
exit 1
fi
# Check Python deps are importable
python -c "import fastapi, peewee, playwright" 2>/dev/null || {
echo "Missing Python dependencies. Install with: pip install -r requirements.txt"
exit 1
}
# Check Playwright Chromium is available
python -c "
from playwright.sync_api import sync_playwright
p = sync_playwright().start()
print(p.chromium.executable_path)
p.stop()
" >/dev/null 2>&1 || {
echo "Playwright Chromium not installed. Run: playwright install chromium"
exit 1
}
# Check dev DB is reachable
DB_HOST="${POSTGRES_HOST_LOCAL:-10.10.0.42}"
python -c "
import socket, sys
s = socket.create_connection((sys.argv[1], 5432), timeout=3)
s.close()
" "$DB_HOST" 2>/dev/null || {
echo "Cannot reach dev PostgreSQL at ${DB_HOST}:5432 — is the homelab up?"
exit 1
}
# ── Ensure directories exist ────────────────────────────────────────────────
mkdir -p logs/database
mkdir -p storage/cards
# ── Launch ───────────────────────────────────────────────────────────────────
echo "Starting Paper Dynasty Database API on http://localhost:${PORT}"
echo " DB: paperdynasty_dev @ 10.10.0.42"
echo " Logs: ${LOGFILE}"
echo ""
# Load .env, then .env.local overrides (for passwords not in version control)
set -a
# shellcheck source=/dev/null
[[ -f .env ]] && source .env
[[ -f .env.local ]] && source .env.local
set +a
# Override DB host to point at the dev server's IP (not Docker network name)
export DATABASE_TYPE=postgresql
export POSTGRES_HOST="$DB_HOST"
export POSTGRES_PORT="${POSTGRES_PORT:-5432}"
export POSTGRES_DB="${POSTGRES_DB:-paperdynasty_dev}"
export POSTGRES_USER="${POSTGRES_USER:-sba_admin}"
export LOG_LEVEL=INFO
export TESTING=True
if [[ -z "${POSTGRES_PASSWORD:-}" || "$POSTGRES_PASSWORD" == "your_production_password" ]]; then
echo "ERROR: POSTGRES_PASSWORD not set or is the placeholder value."
echo "Create .env.local with: POSTGRES_PASSWORD=<actual password>"
exit 1
fi
uvicorn app.main:app \
--host 0.0.0.0 \
--port "$PORT" \
--reload \
--reload-dir app \
--reload-dir storage/templates \
    >"$LOGFILE" 2>&1 &  # redirect (not "| tee") so $! below is uvicorn's PID, not tee's — otherwise --stop kills the wrong process
echo $! >"$PIDFILE"
sleep 2
if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
echo ""
echo "API running (PID $(cat "$PIDFILE"))."
echo ""
echo "Quick test URLs:"
echo " API docs: http://localhost:${PORT}/api/docs"
echo " Health: curl -s http://localhost:${PORT}/api/v2/players/1/battingcard?html=True"
echo ""
echo "Stop with: ./run-local.sh --stop"
else
echo "Failed to start — check ${LOGFILE}"
rm -f "$PIDFILE"
exit 1
fi

View File

@ -2,9 +2,26 @@
<html lang="en">
<head>
{% include 'style.html' %}
{% include 'tier_style.html' %}
</head>
<body>
<div id="fullCard" style="width: 1200px; height: 600px;">
{% if refractor_tier is defined and refractor_tier > 0 %}
{%- set diamond_colors = {
1: {'color': '#1a6b1a', 'highlight': '#40b040'},
2: {'color': '#2070b0', 'highlight': '#50a0e8'},
3: {'color': '#a82020', 'highlight': '#e85050'},
4: {'color': '#6b2d8e', 'highlight': '#a060d0'},
} -%}
{%- set dc = diamond_colors[refractor_tier] -%}
{%- set filled_bg = 'linear-gradient(135deg, ' ~ dc.highlight ~ ' 0%, ' ~ dc.color ~ ' 50%, ' ~ dc.color ~ ' 100%)' -%}
<div class="tier-diamond{% if refractor_tier == 4 %} diamond-glow{% endif %}">
<div class="diamond-quad{% if refractor_tier >= 2 %} filled{% endif %}" {% if refractor_tier >= 2 %}style="background: {{ filled_bg }};"{% endif %}></div>
<div class="diamond-quad{% if refractor_tier >= 1 %} filled{% endif %}" {% if refractor_tier >= 1 %}style="background: {{ filled_bg }};"{% endif %}></div>
<div class="diamond-quad{% if refractor_tier >= 3 %} filled{% endif %}" {% if refractor_tier >= 3 %}style="background: {{ filled_bg }};"{% endif %}></div>
<div class="diamond-quad{% if refractor_tier >= 4 %} filled{% endif %}" {% if refractor_tier >= 4 %}style="background: {{ filled_bg }};"{% endif %}></div>
</div>
{% endif %}
<div id="header" class="row-wrapper header-text border-bot" style="height: 65px">
<!-- <div id="headerLeft" style="flex-grow: 3; height: auto">-->
<div id="headerLeft" style="width: 477px; height: auto">

View File

@ -0,0 +1,216 @@
<style>
#fullCard {
position: relative;
overflow: hidden;
}
</style>
{% if refractor_tier is defined and refractor_tier > 0 %}
<style>
.tier-diamond {
position: absolute;
left: 597px;
top: 78.5px;
transform: translate(-50%, -50%) rotate(45deg);
display: grid;
grid-template: 1fr 1fr / 1fr 1fr;
gap: 2px;
z-index: 20;
pointer-events: none;
background: rgba(0,0,0,0.75);
border-radius: 2px;
box-shadow: 0 0 0 1.5px rgba(0,0,0,0.7), 0 2px 5px rgba(0,0,0,0.5);
}
.diamond-quad {
width: 19px;
height: 19px;
background: rgba(0,0,0,0.3);
}
.diamond-quad.filled {
box-shadow: inset 0 1px 2px rgba(255,255,255,0.45),
inset 0 -1px 2px rgba(0,0,0,0.35),
inset 1px 0 2px rgba(255,255,255,0.15);
}
{% if refractor_tier == 1 %}
/* T1 — Base Chrome */
#header {
background: linear-gradient(135deg, rgba(185,195,210,0.25) 0%, rgba(210,218,228,0.35) 50%, rgba(185,195,210,0.25) 100%), #ffffff;
}
.border-bot {
border-bottom-color: #8e9baf;
border-bottom-width: 4px;
}
#resultHeader.border-bot {
border-bottom-width: 3px;
}
.border-right-thick {
border-right-color: #8e9baf;
}
.border-right-thin {
border-right-color: #8e9baf;
}
.vline {
border-left-color: #8e9baf;
}
{% elif refractor_tier == 2 %}
/* T2 — Refractor */
#header {
background: linear-gradient(135deg, rgba(100,155,230,0.28) 0%, rgba(155,90,220,0.18) 25%, rgba(90,200,210,0.24) 50%, rgba(185,80,170,0.16) 75%, rgba(100,155,230,0.28) 100%), #ffffff;
}
#fullCard {
box-shadow: inset 0 0 14px 3px rgba(90,143,207,0.22);
}
.border-bot {
border-bottom-color: #7a9cc4;
border-bottom-width: 4px;
}
#resultHeader .border-right-thick {
border-right-width: 6px;
}
.border-right-thick {
border-right-color: #7a9cc4;
}
.border-right-thin {
border-right-color: #7a9cc4;
border-right-width: 3px;
}
.vline {
border-left-color: #7a9cc4;
}
.blue-gradient {
background-image: linear-gradient(to right, rgba(60,110,200,1), rgba(100,55,185,0.55), rgba(60,110,200,1));
}
.red-gradient {
background-image: linear-gradient(to right, rgba(190,35,80,1), rgba(165,25,100,0.55), rgba(190,35,80,1));
}
{% elif refractor_tier == 3 %}
/* T3 — Gold Refractor */
#header {
background: linear-gradient(135deg, rgba(195,155,35,0.26) 0%, rgba(235,200,70,0.2) 50%, rgba(195,155,35,0.26) 100%), #ffffff;
overflow: hidden;
position: relative;
}
#fullCard {
box-shadow: inset 0 0 16px 4px rgba(200,165,48,0.22);
}
.border-bot {
border-bottom-color: #c9a94e;
border-bottom-width: 4px;
}
.border-right-thick {
border-right-color: #c9a94e;
}
.border-right-thin {
border-right-color: #c9a94e;
border-right-width: 3px;
}
.vline {
border-left-color: #c9a94e;
}
.blue-gradient {
background-image: linear-gradient(to right, rgba(195,160,40,1), rgba(220,185,60,0.55), rgba(195,160,40,1));
}
.red-gradient {
background-image: linear-gradient(to right, rgba(195,160,40,1), rgba(220,185,60,0.55), rgba(195,160,40,1));
}
/* T3 shimmer animation — paused for static PNG capture */
@keyframes t3-shimmer {
0% { transform: translateX(-130%); }
100% { transform: translateX(230%); }
}
#header::after {
content: '';
position: absolute;
top: 0; left: 0; right: 0; bottom: 0;
background: linear-gradient(
105deg,
transparent 38%,
rgba(255,240,140,0.18) 44%,
rgba(255,220,80,0.38) 50%,
rgba(255,200,60,0.30) 53%,
rgba(255,240,140,0.14) 58%,
transparent 64%
);
pointer-events: none;
z-index: 5;
animation: t3-shimmer 2.5s ease-in-out infinite;
animation-play-state: paused;
}
{% elif refractor_tier == 4 %}
/* T4 — Superfractor */
#header {
background: #ffffff;
overflow: hidden;
position: relative;
}
#fullCard {
box-shadow: inset 0 0 22px 6px rgba(45,212,191,0.28), inset 0 0 39px 9px rgba(200,165,48,0.15);
}
.border-bot {
border-bottom-color: #c9a94e;
border-bottom-width: 4px;
}
.border-right-thick {
border-right-color: #c9a94e;
}
.border-right-thin {
border-right-color: #c9a94e;
}
.vline {
border-left-color: #c9a94e;
}
.blue-gradient {
background-image: linear-gradient(to right, rgba(195,160,40,1), rgba(220,185,60,0.55), rgba(195,160,40,1));
}
.red-gradient {
background-image: linear-gradient(to right, rgba(195,160,40,1), rgba(220,185,60,0.55), rgba(195,160,40,1));
}
/* T4 prismatic header sweep — paused for static PNG capture */
@keyframes t4-prismatic-sweep {
0% { transform: translateX(0%); }
100% { transform: translateX(-50%); }
}
#header::after {
content: '';
position: absolute;
top: 0; left: 0;
width: 200%; height: 100%;
background: linear-gradient(135deg,
transparent 2%, rgba(255,100,100,0.28) 8%, rgba(255,200,50,0.32) 14%,
rgba(100,255,150,0.30) 20%, rgba(50,190,255,0.32) 26%, rgba(140,80,255,0.28) 32%,
rgba(255,100,180,0.24) 38%, transparent 44%,
transparent 52%, rgba(255,100,100,0.28) 58%, rgba(255,200,50,0.32) 64%,
rgba(100,255,150,0.30) 70%, rgba(50,190,255,0.32) 76%, rgba(140,80,255,0.28) 82%,
rgba(255,100,180,0.24) 88%, transparent 94%
);
z-index: 1;
pointer-events: none;
animation: t4-prismatic-sweep 6s linear infinite;
animation-play-state: paused;
}
#header > * { z-index: 2; }
/* T4 diamond glow pulse — paused for static PNG */
@keyframes diamond-glow-pulse {
0%, 100% { box-shadow: 0 0 0 1.5px rgba(0,0,0,0.7), 0 2px 5px rgba(0,0,0,0.5),
0 0 8px 2px rgba(107,45,142,0.6); }
50% { box-shadow: 0 0 0 1.5px rgba(0,0,0,0.5), 0 2px 4px rgba(0,0,0,0.3),
0 0 14px 5px rgba(107,45,142,0.8),
0 0 24px 8px rgba(107,45,142,0.3); }
}
.tier-diamond.diamond-glow {
animation: diamond-glow-pulse 2s ease-in-out infinite;
animation-play-state: paused;
}
{% endif %}
</style>
{% endif %}

View File

@ -44,10 +44,13 @@ from app.db_engine import (
BattingSeasonStats,
PitchingSeasonStats,
ProcessedGame,
EvolutionTrack,
EvolutionCardState,
EvolutionTierBoost,
EvolutionCosmetic,
BattingCard,
PitchingCard,
RefractorTrack,
RefractorCardState,
RefractorTierBoost,
RefractorCosmetic,
RefractorBoostAudit,
ScoutOpportunity,
ScoutClaim,
)
@ -76,10 +79,13 @@ _TEST_MODELS = [
ProcessedGame,
ScoutOpportunity,
ScoutClaim,
EvolutionTrack,
EvolutionCardState,
EvolutionTierBoost,
EvolutionCosmetic,
RefractorTrack,
RefractorCardState,
RefractorTierBoost,
RefractorCosmetic,
BattingCard,
PitchingCard,
RefractorBoostAudit,
]
@ -164,8 +170,8 @@ def team():
@pytest.fixture
def track():
"""A minimal EvolutionTrack for batter cards."""
return EvolutionTrack.create(
"""A minimal RefractorTrack for batter cards."""
return RefractorTrack.create(
name="Batter Track",
card_type="batter",
formula="pa + tb * 2",
@ -177,7 +183,7 @@ def track():
# ---------------------------------------------------------------------------
# PostgreSQL integration fixture (used by test_evolution_*_api.py)
# PostgreSQL integration fixture (used by test_refractor_*_api.py)
# ---------------------------------------------------------------------------

View File

@ -1,361 +0,0 @@
"""Tests for the evolution evaluator service (WP-08).
Unit tests verify tier assignment, advancement, partial progress, idempotency,
full evolution, and no-regression behaviour without touching any database,
using stub Peewee models bound to an in-memory SQLite database.
The formula engine (WP-09) and Peewee models (WP-05/WP-07) are not imported
from db_engine/formula_engine; instead the tests supply minimal stubs and
inject them via the _stats_model, _state_model, _compute_value_fn, and
_tier_from_value_fn overrides on evaluate_card().
Stub track thresholds (batter):
T1: 37 T2: 149 T3: 448 T4: 896
Useful reference values:
    value=30  → T0 (below T1=37)
    value=50  → T1 (37 <= 50 < 149)
    value=100 → T1 (stays T1; T2 threshold is 149)
    value=160 → T2 (149 <= 160 < 448)
    value=900 → T4 (>= 896) → fully_evolved
"""
import pytest
from datetime import datetime
from peewee import (
BooleanField,
CharField,
DateTimeField,
FloatField,
ForeignKeyField,
IntegerField,
Model,
SqliteDatabase,
)
from app.services.evolution_evaluator import evaluate_card
# ---------------------------------------------------------------------------
# Stub models — mirror WP-01/WP-04/WP-07 schema without importing db_engine
# ---------------------------------------------------------------------------
_test_db = SqliteDatabase(":memory:")
class TrackStub(Model):
"""Minimal EvolutionTrack stub for evaluator tests."""
card_type = CharField(unique=True)
t1_threshold = IntegerField()
t2_threshold = IntegerField()
t3_threshold = IntegerField()
t4_threshold = IntegerField()
class Meta:
database = _test_db
table_name = "evolution_track"
class CardStateStub(Model):
"""Minimal EvolutionCardState stub for evaluator tests."""
player_id = IntegerField()
team_id = IntegerField()
track = ForeignKeyField(TrackStub)
current_tier = IntegerField(default=0)
current_value = FloatField(default=0.0)
fully_evolved = BooleanField(default=False)
last_evaluated_at = DateTimeField(null=True)
class Meta:
database = _test_db
table_name = "evolution_card_state"
indexes = ((("player_id", "team_id"), True),)
class StatsStub(Model):
"""Minimal PlayerSeasonStats stub for evaluator tests."""
player_id = IntegerField()
team_id = IntegerField()
season = IntegerField()
pa = IntegerField(default=0)
hits = IntegerField(default=0)
doubles = IntegerField(default=0)
triples = IntegerField(default=0)
hr = IntegerField(default=0)
outs = IntegerField(default=0)
strikeouts = IntegerField(default=0)
class Meta:
database = _test_db
table_name = "player_season_stats"
# ---------------------------------------------------------------------------
# Formula stubs — avoid importing app.services.formula_engine before WP-09
# ---------------------------------------------------------------------------
def _compute_value(card_type: str, stats) -> float:
"""Stub compute_value_for_track: returns pa for batter, outs/3+k for pitchers."""
if card_type == "batter":
singles = stats.hits - stats.doubles - stats.triples - stats.hr
tb = singles + 2 * stats.doubles + 3 * stats.triples + 4 * stats.hr
return float(stats.pa + tb * 2)
return stats.outs / 3 + stats.strikeouts
def _tier_from_value(value: float, track) -> int:
"""Stub tier_from_value using TrackStub fields t1_threshold/t2_threshold/etc."""
if isinstance(track, dict):
t1, t2, t3, t4 = (
track["t1_threshold"],
track["t2_threshold"],
track["t3_threshold"],
track["t4_threshold"],
)
else:
t1, t2, t3, t4 = (
track.t1_threshold,
track.t2_threshold,
track.t3_threshold,
track.t4_threshold,
)
if value >= t4:
return 4
if value >= t3:
return 3
if value >= t2:
return 2
if value >= t1:
return 1
return 0
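The threshold comparison above is inclusive at each boundary (checked top-down from T4). A standalone sketch of the same logic, using the batter thresholds that appear in the fixtures below, makes the boundary behavior explicit:

```python
def tier_from_value(value: float, t1: int, t2: int, t3: int, t4: int) -> int:
    """Highest tier whose threshold the value meets; 0 otherwise."""
    if value >= t4:
        return 4
    if value >= t3:
        return 3
    if value >= t2:
        return 2
    if value >= t1:
        return 1
    return 0

# Batter thresholds from the fixtures: 37 / 149 / 448 / 896
print(tier_from_value(36, 37, 149, 448, 896))   # 0 — just below T1
print(tier_from_value(37, 37, 149, 448, 896))   # 1 — boundary is inclusive
print(tier_from_value(896, 37, 149, 448, 896))  # 4 — fully evolved
```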
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def _db():
"""Create tables before each test and drop them afterwards."""
_test_db.connect(reuse_if_open=True)
_test_db.create_tables([TrackStub, CardStateStub, StatsStub])
yield
_test_db.drop_tables([StatsStub, CardStateStub, TrackStub])
@pytest.fixture()
def batter_track():
return TrackStub.create(
card_type="batter",
t1_threshold=37,
t2_threshold=149,
t3_threshold=448,
t4_threshold=896,
)
@pytest.fixture()
def sp_track():
return TrackStub.create(
card_type="sp",
t1_threshold=10,
t2_threshold=40,
t3_threshold=120,
t4_threshold=240,
)
def _make_state(player_id, team_id, track, current_tier=0, current_value=0.0):
return CardStateStub.create(
player_id=player_id,
team_id=team_id,
track=track,
current_tier=current_tier,
current_value=current_value,
fully_evolved=False,
last_evaluated_at=None,
)
def _make_stats(player_id, team_id, season, **kwargs):
return StatsStub.create(
player_id=player_id, team_id=team_id, season=season, **kwargs
)
def _eval(player_id, team_id):
return evaluate_card(
player_id,
team_id,
_stats_model=StatsStub,
_state_model=CardStateStub,
_compute_value_fn=_compute_value,
_tier_from_value_fn=_tier_from_value,
)
# ---------------------------------------------------------------------------
# Unit tests
# ---------------------------------------------------------------------------
class TestTierAssignment:
"""Tier assigned from computed value against track thresholds."""
def test_value_below_t1_stays_t0(self, batter_track):
"""value=30 is below T1 threshold (37) → tier stays 0."""
_make_state(1, 1, batter_track)
# pa=30, no extra hits → value = 30 + 0 = 30 < 37
_make_stats(1, 1, 1, pa=30)
result = _eval(1, 1)
assert result["current_tier"] == 0
def test_value_at_t1_threshold_assigns_tier_1(self, batter_track):
"""value=50 → T1 (37 <= 50 < 149)."""
_make_state(1, 1, batter_track)
# pa=50, no hits → value = 50 + 0 = 50
_make_stats(1, 1, 1, pa=50)
result = _eval(1, 1)
assert result["current_tier"] == 1
def test_tier_advancement_to_t2(self, batter_track):
"""value=160 → T2 (149 <= 160 < 448)."""
_make_state(1, 1, batter_track)
# pa=160, no hits → value = 160
_make_stats(1, 1, 1, pa=160)
result = _eval(1, 1)
assert result["current_tier"] == 2
def test_partial_progress_stays_t1(self, batter_track):
"""value=100 with T2=149 → stays T1, does not advance to T2."""
_make_state(1, 1, batter_track)
# pa=100 → value = 100, T2 threshold = 149 → tier 1
_make_stats(1, 1, 1, pa=100)
result = _eval(1, 1)
assert result["current_tier"] == 1
assert result["fully_evolved"] is False
def test_fully_evolved_at_t4(self, batter_track):
"""value >= T4 (896) → tier=4 and fully_evolved=True."""
_make_state(1, 1, batter_track)
# pa=900 → value = 900 >= 896
_make_stats(1, 1, 1, pa=900)
result = _eval(1, 1)
assert result["current_tier"] == 4
assert result["fully_evolved"] is True
class TestNoRegression:
"""current_tier never decreases."""
def test_tier_never_decreases(self, batter_track):
"""If current_tier=2 and new value only warrants T1, tier stays 2."""
# Seed state at tier 2
_make_state(1, 1, batter_track, current_tier=2, current_value=160.0)
# Sparse stats: value=50 → would be T1, but current is T2
_make_stats(1, 1, 1, pa=50)
result = _eval(1, 1)
assert result["current_tier"] == 2 # no regression
def test_tier_advances_when_value_improves(self, batter_track):
"""If current_tier=1 and new value warrants T3, tier advances to 3."""
_make_state(1, 1, batter_track, current_tier=1, current_value=50.0)
# pa=500 → value = 500 >= 448 → T3
_make_stats(1, 1, 1, pa=500)
result = _eval(1, 1)
assert result["current_tier"] == 3
class TestIdempotency:
"""Calling evaluate_card twice with same stats returns the same result."""
def test_idempotent_same_result(self, batter_track):
"""Two evaluations with identical stats produce the same tier and value."""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=160)
result1 = _eval(1, 1)
result2 = _eval(1, 1)
assert result1["current_tier"] == result2["current_tier"]
assert result1["current_value"] == result2["current_value"]
assert result1["fully_evolved"] == result2["fully_evolved"]
def test_idempotent_at_fully_evolved(self, batter_track):
"""Repeated evaluation at T4 remains fully_evolved=True."""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=900)
_eval(1, 1)
result = _eval(1, 1)
assert result["current_tier"] == 4
assert result["fully_evolved"] is True
class TestCareerTotals:
"""Stats are summed across all seasons for the player/team pair."""
def test_multi_season_stats_summed(self, batter_track):
"""Stats from two seasons are aggregated into a single career total."""
_make_state(1, 1, batter_track)
# Season 1: pa=80, Season 2: pa=90 → total pa=170 → value=170 → T2
_make_stats(1, 1, 1, pa=80)
_make_stats(1, 1, 2, pa=90)
result = _eval(1, 1)
assert result["current_tier"] == 2
assert result["current_value"] == 170.0
def test_zero_stats_stays_t0(self, batter_track):
"""No stats rows → all zeros → value=0 → tier=0."""
_make_state(1, 1, batter_track)
result = _eval(1, 1)
assert result["current_tier"] == 0
assert result["current_value"] == 0.0
def test_other_team_stats_not_included(self, batter_track):
"""Stats for the same player on a different team are not counted."""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=50)
# Same player, different team — should not count
_make_stats(1, 2, 1, pa=200)
result = _eval(1, 1)
# Only pa=50 counted → value=50 → T1
assert result["current_tier"] == 1
assert result["current_value"] == 50.0
class TestMissingState:
"""ValueError when no card state exists for (player_id, team_id)."""
def test_missing_state_raises(self, batter_track):
"""evaluate_card raises ValueError when no state row exists."""
# No card state created
with pytest.raises(ValueError, match="No evolution_card_state"):
_eval(99, 99)
class TestReturnShape:
"""Return dict has the expected keys and types."""
def test_return_keys(self, batter_track):
"""Result dict contains all expected keys."""
_make_state(1, 1, batter_track)
result = _eval(1, 1)
assert set(result.keys()) == {
"player_id",
"team_id",
"current_tier",
"current_value",
"fully_evolved",
"last_evaluated_at",
}
def test_last_evaluated_at_is_iso_string(self, batter_track):
"""last_evaluated_at is a non-empty ISO-8601 string."""
_make_state(1, 1, batter_track)
result = _eval(1, 1)
ts = result["last_evaluated_at"]
assert isinstance(ts, str) and len(ts) > 0
# Must be parseable as a datetime
datetime.fromisoformat(ts)


@@ -1,159 +0,0 @@
"""
Tests for app/seed/evolution_tracks.py seed_evolution_tracks().
What: Verify that the JSON-driven seed function correctly creates, counts,
and idempotently updates EvolutionTrack rows in the database.
Why: The seed is the single source of truth for track configuration. A
regression here (duplicates, wrong thresholds, missing formula) would
silently corrupt evolution scoring for every card in the system.
Each test operates on a fresh in-memory SQLite database provided by the
autouse `setup_test_db` fixture in conftest.py. The seed reads its data
from `app/seed/evolution_tracks.json` on disk, so the tests also serve as
a light integration check between the JSON file and the Peewee model.
"""
import json
from pathlib import Path
import pytest
from app.db_engine import EvolutionTrack
from app.seed.evolution_tracks import seed_evolution_tracks
# Path to the JSON fixture that the seed reads from at runtime
_JSON_PATH = Path(__file__).parent.parent / "app" / "seed" / "evolution_tracks.json"
@pytest.fixture
def json_tracks():
"""Load the raw JSON definitions so tests can assert against them.
This avoids hardcoding expected values: if the JSON changes, tests
automatically follow without needing manual updates.
"""
return json.loads(_JSON_PATH.read_text(encoding="utf-8"))
def test_seed_creates_three_tracks(json_tracks):
"""After one seed call, exactly 3 EvolutionTrack rows must exist.
Why: The JSON currently defines three card-type tracks (batter, sp, rp).
If the count is wrong the system would either be missing tracks
(evolution disabled for a card type) or have phantom extras.
"""
seed_evolution_tracks()
assert EvolutionTrack.select().count() == 3
def test_seed_correct_card_types(json_tracks):
"""The set of card_type values persisted must match the JSON exactly.
Why: card_type is used as a discriminator throughout the evolution engine.
An unexpected value (e.g. 'pitcher' instead of 'sp') would cause
track-lookup misses and silently skip evolution scoring for that role.
"""
seed_evolution_tracks()
expected_types = {d["card_type"] for d in json_tracks}
actual_types = {t.card_type for t in EvolutionTrack.select()}
assert actual_types == expected_types
def test_seed_thresholds_ascending():
"""For every track, t1 < t2 < t3 < t4.
Why: The evolution engine uses these thresholds to determine tier
boundaries. If they are not strictly ascending, tier comparisons
would produce incorrect or undefined results (e.g. a player could
simultaneously satisfy tier 3 and not satisfy tier 2).
"""
seed_evolution_tracks()
for track in EvolutionTrack.select():
assert (
track.t1_threshold < track.t2_threshold
), f"{track.name}: t1 ({track.t1_threshold}) >= t2 ({track.t2_threshold})"
assert (
track.t2_threshold < track.t3_threshold
), f"{track.name}: t2 ({track.t2_threshold}) >= t3 ({track.t3_threshold})"
assert (
track.t3_threshold < track.t4_threshold
), f"{track.name}: t3 ({track.t3_threshold}) >= t4 ({track.t4_threshold})"
def test_seed_thresholds_positive():
"""All tier threshold values must be strictly greater than zero.
Why: A zero or negative threshold would mean a card starts the game
already evolved (tier >= 1 at 0 accumulated stat points), which would
bypass the entire progression system.
"""
seed_evolution_tracks()
for track in EvolutionTrack.select():
assert track.t1_threshold > 0, f"{track.name}: t1_threshold is not positive"
assert track.t2_threshold > 0, f"{track.name}: t2_threshold is not positive"
assert track.t3_threshold > 0, f"{track.name}: t3_threshold is not positive"
assert track.t4_threshold > 0, f"{track.name}: t4_threshold is not positive"
def test_seed_formula_present():
"""Every persisted track must have a non-empty formula string.
Why: The formula is evaluated at runtime to compute a player's evolution
score. An empty formula would cause either a Python eval error or
silently produce 0 for every player, halting all evolution progress.
"""
seed_evolution_tracks()
for track in EvolutionTrack.select():
assert (
track.formula and track.formula.strip()
), f"{track.name}: formula is empty or whitespace-only"
def test_seed_idempotent():
"""Calling seed_evolution_tracks() twice must still yield exactly 3 rows.
Why: The seed is designed to be safe to re-run (e.g. as part of a
migration or CI bootstrap). If it inserts duplicates on a second call,
the unique constraint on EvolutionTrack.name would raise an IntegrityError
in PostgreSQL, and in SQLite it would silently create phantom rows that
corrupt tier-lookup joins.
"""
seed_evolution_tracks()
seed_evolution_tracks()
assert EvolutionTrack.select().count() == 3
def test_seed_updates_on_rerun(json_tracks):
"""A second seed call must restore any manually changed threshold to the JSON value.
What: Seed once, manually mutate a threshold in the DB, then seed again.
Assert that the threshold is now back to the JSON-defined value.
Why: The seed must act as the authoritative source of truth. If
re-seeding does not overwrite local changes, configuration drift can
build up silently and the production database would diverge from the
checked-in JSON without any visible error.
"""
seed_evolution_tracks()
# Pick the first track and corrupt its t1_threshold
first_def = json_tracks[0]
track = EvolutionTrack.get(EvolutionTrack.name == first_def["name"])
original_t1 = track.t1_threshold
corrupted_value = original_t1 + 9999
track.t1_threshold = corrupted_value
track.save()
# Confirm the corruption took effect before re-seeding
track_check = EvolutionTrack.get(EvolutionTrack.name == first_def["name"])
assert track_check.t1_threshold == corrupted_value
# Re-seed — should restore the JSON value
seed_evolution_tracks()
restored = EvolutionTrack.get(EvolutionTrack.name == first_def["name"])
assert restored.t1_threshold == first_def["t1_threshold"], (
f"Expected t1_threshold={first_def['t1_threshold']} after re-seed, "
f"got {restored.t1_threshold}"
)
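The idempotency and re-seed tests above both rest on the same upsert pattern: rows are keyed by a unique column, and re-running the seed overwrites rather than duplicates. A minimal sketch of that pattern with a dict standing in for the table (names here are hypothetical, not the real seed code):

```python
# Idempotent upsert sketch: keyed by unique name, re-run overwrites.
def seed(table, definitions):
    for d in definitions:
        table[d["name"]] = dict(d)  # insert-or-update by unique name

defs = [{"name": "batter", "t1_threshold": 37}]
db = {}
seed(db, defs)
seed(db, defs)                        # second run: still one row
print(len(db))                        # 1
defs[0]["t1_threshold"] = 40
seed(db, defs)                        # re-seed overwrites drifted values
print(db["batter"]["t1_threshold"])   # 40
```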


@@ -1,609 +0,0 @@
"""Integration tests for the evolution card state API endpoints (WP-07).
Tests cover:
GET /api/v2/teams/{team_id}/evolutions
GET /api/v2/evolution/cards/{card_id}
All tests require a live PostgreSQL connection (POSTGRES_HOST env var) and
assume the evolution schema migration (WP-04) has already been applied.
Tests auto-skip when POSTGRES_HOST is not set.
Test data is inserted via psycopg2 in a module-scoped fixture and cleaned
up in its teardown so the tests are repeatable. ON CONFLICT / CASCADE
clauses keep the table clean even if a previous run did not complete teardown.
Object graph built by fixtures
-------------------------------
rarity_row -- a seeded rarity row
cardset_row -- a seeded cardset row
player_row -- a seeded player row (FK: rarity, cardset)
team_row -- a seeded team row
track_row -- a seeded evolution_track row (batter)
card_row -- a seeded card row (FK: player, team, pack, pack_type, cardset)
state_row -- a seeded evolution_card_state row (FK: player, team, track)
Test matrix
-----------
test_list_team_evolutions -- baseline: returns count + items for a team
test_list_filter_by_card_type -- card_type query param filters by track.card_type
test_list_filter_by_tier -- tier query param filters by current_tier
test_list_pagination -- page/per_page params slice results correctly
test_get_card_state_shape -- single card returns all required response fields
test_get_card_state_next_threshold -- next_threshold is the threshold for tier above current
test_get_card_id_resolves_player -- card_id joins Card -> Player/Team -> EvolutionCardState
test_get_card_404_no_state -- card with no EvolutionCardState returns 404
test_duplicate_cards_share_state -- two cards same player+team return the same state row
test_auth_required -- missing token returns 401 on both endpoints
"""
import os
import pytest
from fastapi.testclient import TestClient
POSTGRES_HOST = os.environ.get("POSTGRES_HOST")
_skip_no_pg = pytest.mark.skipif(
not POSTGRES_HOST, reason="POSTGRES_HOST not set — integration tests skipped"
)
AUTH_HEADER = {"Authorization": f"Bearer {os.environ.get('API_TOKEN', 'test-token')}"}
# ---------------------------------------------------------------------------
# Shared fixtures: seed and clean up the full object graph
# ---------------------------------------------------------------------------
@pytest.fixture(scope="module")
def seeded_data(pg_conn):
"""Insert all rows needed for state API tests; delete them after the module.
Returns a dict with the integer IDs of every inserted row so individual
test functions can reference them by key.
Insertion order respects FK dependencies:
rarity -> cardset -> player
pack_type (needs cardset) -> pack (needs team + pack_type) -> card
evolution_track -> evolution_card_state
"""
cur = pg_conn.cursor()
# Rarity
cur.execute(
"""
INSERT INTO rarity (value, name, color)
VALUES (99, 'WP07TestRarity', '#123456')
ON CONFLICT (name) DO UPDATE SET value = EXCLUDED.value
RETURNING id
"""
)
rarity_id = cur.fetchone()[0]
# Cardset
cur.execute(
"""
INSERT INTO cardset (name, description, total_cards)
VALUES ('WP07 Test Set', 'evo state api tests', 1)
ON CONFLICT (name) DO UPDATE SET description = EXCLUDED.description
RETURNING id
"""
)
cardset_id = cur.fetchone()[0]
# Player 1 (batter)
cur.execute(
"""
INSERT INTO player (p_name, rarity_id, cardset_id, set_num, pos_1,
image, mlbclub, franchise, description)
VALUES ('WP07 Batter', %s, %s, 901, '1B',
'https://example.com/wp07_b.png', 'TST', 'TST', 'wp07 test batter')
RETURNING player_id
""",
(rarity_id, cardset_id),
)
player_id = cur.fetchone()[0]
# Player 2 (sp) for cross-card_type filter test
cur.execute(
"""
INSERT INTO player (p_name, rarity_id, cardset_id, set_num, pos_1,
image, mlbclub, franchise, description)
VALUES ('WP07 Pitcher', %s, %s, 902, 'SP',
'https://example.com/wp07_p.png', 'TST', 'TST', 'wp07 test pitcher')
RETURNING player_id
""",
(rarity_id, cardset_id),
)
player2_id = cur.fetchone()[0]
# Team
cur.execute(
"""
INSERT INTO team (abbrev, sname, lname, gmid, gmname, gsheet,
wallet, team_value, collection_value, season, is_ai)
VALUES ('WP7', 'WP07', 'WP07 Test Team', 700000001, 'wp07user',
'https://docs.google.com/wp07', 0, 0, 0, 11, false)
RETURNING id
"""
)
team_id = cur.fetchone()[0]
# Evolution tracks
cur.execute(
"""
INSERT INTO evolution_track (name, card_type, formula,
t1_threshold, t2_threshold,
t3_threshold, t4_threshold)
VALUES ('WP07 Batter Track', 'batter', 'pa + tb * 2', 37, 149, 448, 896)
ON CONFLICT (name) DO UPDATE SET card_type = EXCLUDED.card_type
RETURNING id
"""
)
batter_track_id = cur.fetchone()[0]
cur.execute(
"""
INSERT INTO evolution_track (name, card_type, formula,
t1_threshold, t2_threshold,
t3_threshold, t4_threshold)
VALUES ('WP07 SP Track', 'sp', 'ip + k', 10, 40, 120, 240)
ON CONFLICT (name) DO UPDATE SET card_type = EXCLUDED.card_type
RETURNING id
"""
)
sp_track_id = cur.fetchone()[0]
# Pack type + pack (needed as FK parent for Card)
cur.execute(
"""
INSERT INTO pack_type (name, cost, card_count, cardset_id)
VALUES ('WP07 Pack Type', 100, 5, %s)
RETURNING id
""",
(cardset_id,),
)
pack_type_id = cur.fetchone()[0]
cur.execute(
"""
INSERT INTO pack (team_id, pack_type_id)
VALUES (%s, %s)
RETURNING id
""",
(team_id, pack_type_id),
)
pack_id = cur.fetchone()[0]
# Card linking batter player to team
cur.execute(
"""
INSERT INTO card (player_id, team_id, pack_id, value)
VALUES (%s, %s, %s, 0)
RETURNING id
""",
(player_id, team_id, pack_id),
)
card_id = cur.fetchone()[0]
# Second card for same player+team (shared-state test)
cur.execute(
"""
INSERT INTO pack (team_id, pack_type_id)
VALUES (%s, %s)
RETURNING id
""",
(team_id, pack_type_id),
)
pack2_id = cur.fetchone()[0]
cur.execute(
"""
INSERT INTO card (player_id, team_id, pack_id, value)
VALUES (%s, %s, %s, 0)
RETURNING id
""",
(player_id, team_id, pack2_id),
)
card2_id = cur.fetchone()[0]
# Card with NO state (404 test)
cur.execute(
"""
INSERT INTO pack (team_id, pack_type_id)
VALUES (%s, %s)
RETURNING id
""",
(team_id, pack_type_id),
)
pack3_id = cur.fetchone()[0]
cur.execute(
"""
INSERT INTO card (player_id, team_id, pack_id, value)
VALUES (%s, %s, %s, 0)
RETURNING id
""",
(player2_id, team_id, pack3_id),
)
card_no_state_id = cur.fetchone()[0]
# Evolution card states
# Batter player at tier 1, value 87.5
cur.execute(
"""
INSERT INTO evolution_card_state
(player_id, team_id, track_id, current_tier, current_value,
fully_evolved, last_evaluated_at)
VALUES (%s, %s, %s, 1, 87.5, false, '2026-03-12T14:00:00Z')
RETURNING id
""",
(player_id, team_id, batter_track_id),
)
state_id = cur.fetchone()[0]
pg_conn.commit()
yield {
"rarity_id": rarity_id,
"cardset_id": cardset_id,
"player_id": player_id,
"player2_id": player2_id,
"team_id": team_id,
"batter_track_id": batter_track_id,
"sp_track_id": sp_track_id,
"pack_type_id": pack_type_id,
"card_id": card_id,
"card2_id": card2_id,
"card_no_state_id": card_no_state_id,
"state_id": state_id,
}
# Teardown: delete in reverse FK order
cur.execute("DELETE FROM evolution_card_state WHERE id = %s", (state_id,))
cur.execute(
"DELETE FROM card WHERE id = ANY(%s)",
([card_id, card2_id, card_no_state_id],),
)
cur.execute("DELETE FROM pack WHERE id = ANY(%s)", ([pack_id, pack2_id, pack3_id],))
cur.execute("DELETE FROM pack_type WHERE id = %s", (pack_type_id,))
cur.execute(
"DELETE FROM evolution_track WHERE id = ANY(%s)",
([batter_track_id, sp_track_id],),
)
cur.execute(
"DELETE FROM player WHERE player_id = ANY(%s)", ([player_id, player2_id],)
)
cur.execute("DELETE FROM team WHERE id = %s", (team_id,))
cur.execute("DELETE FROM cardset WHERE id = %s", (cardset_id,))
cur.execute("DELETE FROM rarity WHERE id = %s", (rarity_id,))
pg_conn.commit()
@pytest.fixture(scope="module")
def client():
"""FastAPI TestClient backed by the real PostgreSQL database."""
from app.main import app
with TestClient(app) as c:
yield c
# ---------------------------------------------------------------------------
# Tests: GET /api/v2/teams/{team_id}/evolutions
# ---------------------------------------------------------------------------
@_skip_no_pg
def test_list_team_evolutions(client, seeded_data):
"""GET /teams/{id}/evolutions returns count=1 and one item for the seeded state.
Verifies the basic list response shape: a dict with 'count' and 'items',
and that the single item contains player_id, team_id, and current_tier.
"""
team_id = seeded_data["team_id"]
resp = client.get(f"/api/v2/teams/{team_id}/evolutions", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 1
assert len(data["items"]) == 1
item = data["items"][0]
assert item["player_id"] == seeded_data["player_id"]
assert item["team_id"] == team_id
assert item["current_tier"] == 1
@_skip_no_pg
def test_list_filter_by_card_type(client, seeded_data, pg_conn):
"""card_type filter includes states whose track.card_type matches and excludes others.
Seeds a second evolution_card_state for player2 (sp track) then queries
card_type=batter (returns 1) and card_type=sp (returns 1).
Verifies the JOIN to evolution_track and the WHERE predicate on card_type.
"""
cur = pg_conn.cursor()
# Add a state for the sp player so we have two types in this team
cur.execute(
"""
INSERT INTO evolution_card_state
(player_id, team_id, track_id, current_tier, current_value, fully_evolved)
VALUES (%s, %s, %s, 0, 0.0, false)
RETURNING id
""",
(seeded_data["player2_id"], seeded_data["team_id"], seeded_data["sp_track_id"]),
)
sp_state_id = cur.fetchone()[0]
pg_conn.commit()
try:
team_id = seeded_data["team_id"]
resp_batter = client.get(
f"/api/v2/teams/{team_id}/evolutions?card_type=batter", headers=AUTH_HEADER
)
assert resp_batter.status_code == 200
batter_data = resp_batter.json()
assert batter_data["count"] == 1
assert batter_data["items"][0]["player_id"] == seeded_data["player_id"]
resp_sp = client.get(
f"/api/v2/teams/{team_id}/evolutions?card_type=sp", headers=AUTH_HEADER
)
assert resp_sp.status_code == 200
sp_data = resp_sp.json()
assert sp_data["count"] == 1
assert sp_data["items"][0]["player_id"] == seeded_data["player2_id"]
finally:
cur.execute("DELETE FROM evolution_card_state WHERE id = %s", (sp_state_id,))
pg_conn.commit()
@_skip_no_pg
def test_list_filter_by_tier(client, seeded_data, pg_conn):
"""tier filter includes only states at the specified current_tier.
The base fixture has player1 at tier=1. This test temporarily advances
it to tier=2, then queries tier=1 (should return 0) and tier=2 (should
return 1). Restores to tier=1 after assertions.
"""
cur = pg_conn.cursor()
# Advance to tier 2
cur.execute(
"UPDATE evolution_card_state SET current_tier = 2 WHERE id = %s",
(seeded_data["state_id"],),
)
pg_conn.commit()
try:
team_id = seeded_data["team_id"]
resp_t1 = client.get(
f"/api/v2/teams/{team_id}/evolutions?tier=1", headers=AUTH_HEADER
)
assert resp_t1.status_code == 200
assert resp_t1.json()["count"] == 0
resp_t2 = client.get(
f"/api/v2/teams/{team_id}/evolutions?tier=2", headers=AUTH_HEADER
)
assert resp_t2.status_code == 200
t2_data = resp_t2.json()
assert t2_data["count"] == 1
assert t2_data["items"][0]["current_tier"] == 2
finally:
cur.execute(
"UPDATE evolution_card_state SET current_tier = 1 WHERE id = %s",
(seeded_data["state_id"],),
)
pg_conn.commit()
@_skip_no_pg
def test_list_pagination(client, seeded_data, pg_conn):
"""page/per_page params slice the full result set correctly.
Temporarily inserts a second state (for player2 on the same team) so
the list has 2 items. With per_page=1, page=1 returns item 1 and
page=2 returns item 2; they must be different players.
"""
cur = pg_conn.cursor()
cur.execute(
"""
INSERT INTO evolution_card_state
(player_id, team_id, track_id, current_tier, current_value, fully_evolved)
VALUES (%s, %s, %s, 0, 0.0, false)
RETURNING id
""",
(
seeded_data["player2_id"],
seeded_data["team_id"],
seeded_data["batter_track_id"],
),
)
extra_state_id = cur.fetchone()[0]
pg_conn.commit()
try:
team_id = seeded_data["team_id"]
resp1 = client.get(
f"/api/v2/teams/{team_id}/evolutions?page=1&per_page=1", headers=AUTH_HEADER
)
assert resp1.status_code == 200
data1 = resp1.json()
assert len(data1["items"]) == 1
resp2 = client.get(
f"/api/v2/teams/{team_id}/evolutions?page=2&per_page=1", headers=AUTH_HEADER
)
assert resp2.status_code == 200
data2 = resp2.json()
assert len(data2["items"]) == 1
assert data1["items"][0]["player_id"] != data2["items"][0]["player_id"]
finally:
cur.execute("DELETE FROM evolution_card_state WHERE id = %s", (extra_state_id,))
pg_conn.commit()
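The page/per_page semantics this test assumes (1-indexed pages, offset = (page - 1) * per_page) can be sketched as a plain list slice; `page_slice` is a hypothetical helper, not the endpoint's actual implementation:

```python
# Offset/limit slicing implied by page/per_page (pages are 1-indexed):
def page_slice(items, page, per_page):
    start = (page - 1) * per_page
    return items[start:start + per_page]

rows = ["player1", "player2"]
print(page_slice(rows, 1, 1))  # ['player1']
print(page_slice(rows, 2, 1))  # ['player2']
print(page_slice(rows, 3, 1))  # [] — past the end
```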
# ---------------------------------------------------------------------------
# Tests: GET /api/v2/evolution/cards/{card_id}
# ---------------------------------------------------------------------------
@_skip_no_pg
def test_get_card_state_shape(client, seeded_data):
"""GET /evolution/cards/{card_id} returns all required fields.
Verifies the full response envelope:
player_id, team_id, current_tier, current_value, fully_evolved,
last_evaluated_at, next_threshold, and a nested 'track' dict
with id, name, card_type, formula, and t1-t4 thresholds.
"""
card_id = seeded_data["card_id"]
resp = client.get(f"/api/v2/evolution/cards/{card_id}", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["player_id"] == seeded_data["player_id"]
assert data["team_id"] == seeded_data["team_id"]
assert data["current_tier"] == 1
assert data["current_value"] == 87.5
assert data["fully_evolved"] is False
t = data["track"]
assert t["id"] == seeded_data["batter_track_id"]
assert t["name"] == "WP07 Batter Track"
assert t["card_type"] == "batter"
assert t["formula"] == "pa + tb * 2"
assert t["t1_threshold"] == 37
assert t["t2_threshold"] == 149
assert t["t3_threshold"] == 448
assert t["t4_threshold"] == 896
# tier=1 -> next threshold is t2_threshold
assert data["next_threshold"] == 149
@_skip_no_pg
def test_get_card_state_next_threshold(client, seeded_data, pg_conn):
"""next_threshold reflects the threshold for the tier immediately above current.
Tier mapping:
0 -> t1_threshold (37)
1 -> t2_threshold (149)
2 -> t3_threshold (448)
3 -> t4_threshold (896)
4 -> null (fully evolved)
This test advances the state to tier=2, confirms next_threshold=448,
then to tier=4 (fully_evolved=True) and confirms next_threshold=null.
Restores original state after assertions.
"""
cur = pg_conn.cursor()
card_id = seeded_data["card_id"]
state_id = seeded_data["state_id"]
# Advance to tier 2
cur.execute(
"UPDATE evolution_card_state SET current_tier = 2 WHERE id = %s", (state_id,)
)
pg_conn.commit()
try:
resp = client.get(f"/api/v2/evolution/cards/{card_id}", headers=AUTH_HEADER)
assert resp.status_code == 200
assert resp.json()["next_threshold"] == 448 # t3_threshold
# Advance to tier 4 (fully evolved)
cur.execute(
"UPDATE evolution_card_state SET current_tier = 4, fully_evolved = true "
"WHERE id = %s",
(state_id,),
)
pg_conn.commit()
resp2 = client.get(f"/api/v2/evolution/cards/{card_id}", headers=AUTH_HEADER)
assert resp2.status_code == 200
assert resp2.json()["next_threshold"] is None
finally:
cur.execute(
"UPDATE evolution_card_state SET current_tier = 1, fully_evolved = false "
"WHERE id = %s",
(state_id,),
)
pg_conn.commit()
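The tier-to-threshold mapping spelled out in the docstring above can be sketched as a lookup table; `next_threshold` here is a hypothetical helper showing the behavior the endpoint is expected to exhibit, not its real code:

```python
# next_threshold: threshold for the tier immediately above current;
# None once fully evolved (tier 4).
def next_threshold(tier, track):
    mapping = {
        0: track["t1_threshold"],
        1: track["t2_threshold"],
        2: track["t3_threshold"],
        3: track["t4_threshold"],
    }
    return mapping.get(tier)  # tier 4 falls through to None

track = {"t1_threshold": 37, "t2_threshold": 149,
         "t3_threshold": 448, "t4_threshold": 896}
print(next_threshold(1, track))  # 149
print(next_threshold(4, track))  # None
```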
@_skip_no_pg
def test_get_card_id_resolves_player(client, seeded_data):
"""card_id is resolved via the Card table to obtain (player_id, team_id).
The endpoint must JOIN Card -> Player + Team to find the EvolutionCardState.
Verifies that card_id correctly maps to the right player's evolution state.
"""
card_id = seeded_data["card_id"]
resp = client.get(f"/api/v2/evolution/cards/{card_id}", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["player_id"] == seeded_data["player_id"]
assert data["team_id"] == seeded_data["team_id"]
@_skip_no_pg
def test_get_card_404_no_state(client, seeded_data):
"""GET /evolution/cards/{card_id} returns 404 when no EvolutionCardState exists.
card_no_state_id is a card row for player2 on the team, but no
evolution_card_state row was created for player2. The endpoint must
return 404, not 500 or an empty response.
"""
card_id = seeded_data["card_no_state_id"]
resp = client.get(f"/api/v2/evolution/cards/{card_id}", headers=AUTH_HEADER)
assert resp.status_code == 404
@_skip_no_pg
def test_duplicate_cards_share_state(client, seeded_data):
"""Two Card rows for the same player+team share one EvolutionCardState.
card_id and card2_id both belong to player_id on team_id. Because of the
unique (player, team) constraint, only one state row can exist, so both
card IDs must resolve to the same state data.
"""
card1_id = seeded_data["card_id"]
card2_id = seeded_data["card2_id"]
resp1 = client.get(f"/api/v2/evolution/cards/{card1_id}", headers=AUTH_HEADER)
resp2 = client.get(f"/api/v2/evolution/cards/{card2_id}", headers=AUTH_HEADER)
assert resp1.status_code == 200
assert resp2.status_code == 200
data1 = resp1.json()
data2 = resp2.json()
assert data1["player_id"] == data2["player_id"] == seeded_data["player_id"]
assert data1["current_tier"] == data2["current_tier"] == 1
assert data1["current_value"] == data2["current_value"] == 87.5
# ---------------------------------------------------------------------------
# Auth tests
# ---------------------------------------------------------------------------
@_skip_no_pg
def test_auth_required(client, seeded_data):
"""Both endpoints return 401 when no Bearer token is provided.
Verifies that the valid_token dependency is enforced on:
GET /api/v2/teams/{id}/evolutions
GET /api/v2/evolution/cards/{id}
"""
team_id = seeded_data["team_id"]
card_id = seeded_data["card_id"]
resp_list = client.get(f"/api/v2/teams/{team_id}/evolutions")
assert resp_list.status_code == 401
resp_card = client.get(f"/api/v2/evolution/cards/{card_id}")
assert resp_card.status_code == 401


@@ -1,132 +0,0 @@
"""Integration tests for the evolution track catalog API endpoints (WP-06).
Tests cover:
GET /api/v2/evolution/tracks
GET /api/v2/evolution/tracks/{track_id}
All tests require a live PostgreSQL connection (POSTGRES_HOST env var) and
assume the evolution schema migration (WP-04) has already been applied.
Tests auto-skip when POSTGRES_HOST is not set.
Test data is inserted via psycopg2 before the test module runs and deleted
afterwards so the tests are repeatable. ON CONFLICT keeps the table clean
even if a previous run did not complete teardown.
"""
import os
import pytest
from fastapi.testclient import TestClient
POSTGRES_HOST = os.environ.get("POSTGRES_HOST")
_skip_no_pg = pytest.mark.skipif(
not POSTGRES_HOST, reason="POSTGRES_HOST not set — integration tests skipped"
)
AUTH_HEADER = {"Authorization": f"Bearer {os.environ.get('API_TOKEN', 'test-token')}"}
_SEED_TRACKS = [
("Batter", "batter", "pa+tb*2", 37, 149, 448, 896),
("Starting Pitcher", "sp", "ip+k", 10, 40, 120, 240),
("Relief Pitcher", "rp", "ip+k", 3, 12, 35, 70),
]
@pytest.fixture(scope="module")
def seeded_tracks(pg_conn):
"""Insert three canonical evolution tracks; remove them after the module.
Uses ON CONFLICT DO UPDATE so the fixture is safe to run even if rows
already exist from a prior test run that did not clean up. Returns the
list of row IDs that were upserted.
"""
cur = pg_conn.cursor()
ids = []
for name, card_type, formula, t1, t2, t3, t4 in _SEED_TRACKS:
cur.execute(
"""
INSERT INTO evolution_track
(name, card_type, formula, t1_threshold, t2_threshold, t3_threshold, t4_threshold)
VALUES (%s, %s, %s, %s, %s, %s, %s)
ON CONFLICT (card_type) DO UPDATE SET
name = EXCLUDED.name,
formula = EXCLUDED.formula,
t1_threshold = EXCLUDED.t1_threshold,
t2_threshold = EXCLUDED.t2_threshold,
t3_threshold = EXCLUDED.t3_threshold,
t4_threshold = EXCLUDED.t4_threshold
RETURNING id
""",
(name, card_type, formula, t1, t2, t3, t4),
)
ids.append(cur.fetchone()[0])
pg_conn.commit()
yield ids
cur.execute("DELETE FROM evolution_track WHERE id = ANY(%s)", (ids,))
pg_conn.commit()
@pytest.fixture(scope="module")
def client():
"""FastAPI TestClient backed by the real PostgreSQL database."""
from app.main import app
with TestClient(app) as c:
yield c
@_skip_no_pg
def test_list_tracks_returns_count_3(client, seeded_tracks):
"""GET /tracks returns all three tracks with count=3.
After seeding batter/sp/rp, the table should have exactly those three
rows (no other tracks are inserted by other test modules).
"""
resp = client.get("/api/v2/evolution/tracks", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 3
assert len(data["items"]) == 3
@_skip_no_pg
def test_filter_by_card_type(client, seeded_tracks):
"""card_type=sp filter returns exactly 1 track with card_type 'sp'."""
resp = client.get("/api/v2/evolution/tracks?card_type=sp", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 1
assert data["items"][0]["card_type"] == "sp"
@_skip_no_pg
def test_get_single_track_with_thresholds(client, seeded_tracks):
"""GET /tracks/{id} returns a track dict with formula and t1-t4 thresholds."""
track_id = seeded_tracks[0] # batter
resp = client.get(f"/api/v2/evolution/tracks/{track_id}", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["card_type"] == "batter"
assert data["formula"] == "pa+tb*2"
for key in ("t1_threshold", "t2_threshold", "t3_threshold", "t4_threshold"):
assert key in data, f"Missing field: {key}"
assert data["t1_threshold"] == 37
assert data["t4_threshold"] == 896
@_skip_no_pg
def test_404_for_nonexistent_track(client, seeded_tracks):
"""GET /tracks/999999 returns 404 when the track does not exist."""
resp = client.get("/api/v2/evolution/tracks/999999", headers=AUTH_HEADER)
assert resp.status_code == 404
@_skip_no_pg
def test_auth_required(client, seeded_tracks):
"""Requests without a Bearer token return 401 for both endpoints."""
resp_list = client.get("/api/v2/evolution/tracks")
assert resp_list.status_code == 401
track_id = seeded_tracks[0]
resp_single = client.get(f"/api/v2/evolution/tracks/{track_id}")
assert resp_single.status_code == 401

View File

@@ -3,7 +3,7 @@
Unit tests only; no database required. Stats inputs are simple namespace
objects whose attributes match what BattingSeasonStats/PitchingSeasonStats expose.
-Tier thresholds used (from evolution_tracks.json seed data):
+Tier thresholds used (from refractor_tracks.json seed data):
Batter: t1=37, t2=149, t3=448, t4=896
SP: t1=10, t2=40, t3=120, t4=240
RP: t1=3, t2=12, t3=35, t4=70
@@ -204,3 +204,120 @@ def test_tier_t3_boundary():
def test_tier_accepts_namespace_track():
"""tier_from_value must work with attribute-style track objects (Peewee models)."""
assert tier_from_value(37, track_ns("batter")) == 1
# ---------------------------------------------------------------------------
# T1-1: Negative singles guard in compute_batter_value
# ---------------------------------------------------------------------------
def test_batter_negative_singles_component():
"""hits=1, doubles=1, triples=1, hr=0 produces singles=-1.
What: The formula computes singles = hits - doubles - triples - hr.
With hits=1, doubles=1, triples=1, hr=0 the result is singles = -1,
which is a physically impossible stat line but valid arithmetic input.
Why: Document the formula's actual behaviour when given an incoherent stat
line so that callers are aware that no clamping or guard exists. If a
guard is added in the future, this test will catch the change in behaviour.
singles = 1 - 1 - 1 - 0 = -1
tb = (-1)*1 + 1*2 + 1*3 + 0*4 = -1 + 2 + 3 = 4
value = pa + tb*2 = 0 + 4*2 = 8
"""
stats = batter_stats(hits=1, doubles=1, triples=1, hr=0)
# singles will be -1; the formula does NOT clamp, so TB = 4 and value = 8.0
result = compute_batter_value(stats)
assert result == 8.0, (
f"Expected 8.0 (negative singles flows through unclamped), got {result}"
)
def test_batter_negative_singles_is_not_clamped():
"""A singles value below zero is NOT clamped to zero by the formula.
What: Confirms that singles < 0 propagates into TB rather than being
floored at 0. If clamping were added, tb would be 0*1 + 1*2 + 1*3 = 5
and value would be 10.0, not 8.0.
Why: Guards future refactors. If someone adds `singles = max(0, ...)`,
this assertion will fail immediately, surfacing the behaviour change.
"""
stats = batter_stats(hits=1, doubles=1, triples=1, hr=0)
unclamped_value = compute_batter_value(stats)
# If singles were clamped to 0: tb = 0+2+3 = 5, value = 10.0
clamped_value = 10.0
assert unclamped_value != clamped_value, (
"Formula appears to clamp negative singles — behaviour has changed"
)
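A self-contained sketch of the unclamped arithmetic these two tests document (the real compute_batter_value takes a stats object; this hypothetical helper takes plain keyword arguments instead):

```python
def compute_batter_value_sketch(pa, hits, doubles, triples, hr):
    # Mirrors the documented formula: singles are derived, not stored,
    # and are deliberately NOT clamped at zero.
    singles = hits - doubles - triples - hr
    tb = singles + 2 * doubles + 3 * triples + 4 * hr
    return float(pa + tb * 2)

# Incoherent stat line from the tests above: hits=1, doubles=1, triples=1.
# singles = -1, tb = -1 + 2 + 3 = 4, value = 0 + 4*2 = 8.0
assert compute_batter_value_sketch(pa=0, hits=1, doubles=1, triples=1, hr=0) == 8.0
# With clamping (singles = max(0, ...)) the same line would yield 10.0.
```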
# ---------------------------------------------------------------------------
# T1-2: Tier boundary precision with float SP values
# ---------------------------------------------------------------------------
def test_sp_tier_just_below_t1_outs29():
"""SP with outs=29 produces IP=9.666..., which is below T1 threshold (10) → T0.
What: 29 outs / 3 = 9.6666... IP + 0 K = 9.6666... value.
The SP T1 threshold is 10.0, so this value is strictly below T1.
Why: Floating-point IP values accumulate slowly for pitchers. A bug that
truncated or rounded IP upward could cause premature tier advancement.
Verify that tier_from_value uses a >= comparison (not >) and handles
non-integer values correctly.
"""
stats = pitcher_stats(outs=29, strikeouts=0)
value = compute_sp_value(stats)
assert value == pytest.approx(29 / 3) # 9.6666...
assert value < 10.0 # strictly below T1
assert tier_from_value(value, track_dict("sp")) == 0
def test_sp_tier_exactly_t1_outs30():
"""SP with outs=30 produces IP=10.0, exactly at T1 threshold → T1.
What: 30 outs / 3 = 10.0 IP + 0 K = 10.0 value.
The SP T1 threshold is 10.0, so value == t1 satisfies the >= condition.
Why: Off-by-one or strictly-greater-than comparisons would classify
this as T0 instead of T1. The boundary value must correctly promote
to the matching tier.
"""
stats = pitcher_stats(outs=30, strikeouts=0)
value = compute_sp_value(stats)
assert value == 10.0
assert tier_from_value(value, track_dict("sp")) == 1
def test_sp_float_value_at_exact_t2_boundary():
"""SP value exactly at T2 threshold (40.0) → T2.
What: outs=120 -> IP=40.0, strikeouts=0 -> value=40.0.
T2 threshold for SP is 40. The >= comparison must promote to T2.
Why: Validates that all four tier thresholds use inclusive lower-bound
comparisons for float values, not just T1.
"""
stats = pitcher_stats(outs=120, strikeouts=0)
value = compute_sp_value(stats)
assert value == 40.0
assert tier_from_value(value, track_dict("sp")) == 2
def test_sp_float_value_just_below_t2():
"""SP value just below T2 (39.999...) stays at T1.
What: outs=119 -> IP=39.6666..., strikeouts=0 -> value=39.666...
This is strictly less than T2=40, so tier should be 1 (already past T1=10).
Why: Confirms that sub-threshold float values are not prematurely promoted
due to floating-point comparison imprecision.
"""
stats = pitcher_stats(outs=119, strikeouts=0)
value = compute_sp_value(stats)
assert value == pytest.approx(119 / 3) # 39.666...
assert value < 40.0
assert tier_from_value(value, track_dict("sp")) == 1
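The inclusive >= comparison these boundary tests pin down can be sketched with the SP seed thresholds (a standalone illustration, not the project's tier_from_value):

```python
SP_THRESHOLDS = (10.0, 40.0, 120.0, 240.0)  # t1..t4 from the seed data

def tier_from_value_sketch(value, thresholds=SP_THRESHOLDS):
    # Inclusive lower bounds: value == threshold promotes to that tier.
    tier = 0
    for i, t in enumerate(thresholds, start=1):
        if value >= t:
            tier = i
    return tier

assert tier_from_value_sketch(29 / 3) == 0   # 9.666... stays T0
assert tier_from_value_sketch(30 / 3) == 1   # exactly 10.0 promotes to T1
assert tier_from_value_sketch(119 / 3) == 1  # 39.666... stays T1
assert tier_from_value_sketch(120 / 3) == 2  # exactly 40.0 promotes to T2
```

Note that 30/3 and 120/3 are exactly representable as floats, so the == boundary cases are well-defined here.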

View File

@@ -1,667 +0,0 @@
"""Integration tests for WP-13: Post-Game Callback Integration.
Tests cover both post-game callback endpoints:
POST /api/v2/season-stats/update-game/{game_id}
POST /api/v2/evolution/evaluate-game/{game_id}
All tests run against a named shared-memory SQLite database so that Peewee
model queries inside the route handlers (which execute in the TestClient's
thread) and test fixture setup/assertions (which execute in the pytest thread)
use the same underlying database connection. This is necessary because
SQLite :memory: databases are per-connection: a new thread gets a new empty
database unless a shared-cache URI is used.
The WP-13 tests therefore manage their own database fixture (_wp13_db) and do
not use the conftest autouse setup_test_db. The module-level setup_wp13_db
fixture creates tables before each test and drops them after.
The season_stats service 'db' reference is patched at module level so that
db.atomic() inside update_season_stats() operates on _wp13_db.
Test matrix:
test_update_game_creates_season_stats_rows
POST to update-game, assert player_season_stats rows are created.
test_update_game_response_shape
Response contains {"updated": N, "skipped": false}.
test_update_game_idempotent
Second POST to same game_id returns skipped=true, stats unchanged.
test_evaluate_game_increases_current_value
After update-game, POST to evaluate-game, assert current_value > 0.
test_evaluate_game_tier_advancement
Set up card near tier threshold, game pushes past it, assert tier advanced.
test_evaluate_game_no_tier_advancement
Player accumulates too few stats; tier stays at 0.
test_evaluate_game_tier_ups_in_response
Tier-up appears in tier_ups list with correct fields.
test_evaluate_game_skips_players_without_state
Players in game but without EvolutionCardState are silently skipped.
test_auth_required_update_game
Missing bearer token returns 401 on update-game.
test_auth_required_evaluate_game
Missing bearer token returns 401 on evaluate-game.
"""
import os
# Set API_TOKEN before any app imports so that app.dependencies.AUTH_TOKEN
# is initialised to the same value as our test bearer token.
os.environ.setdefault("API_TOKEN", "test-token")
import app.services.season_stats as _season_stats_module
import pytest
from fastapi import FastAPI, Request
from fastapi.testclient import TestClient
from peewee import SqliteDatabase
from app.db_engine import (
Cardset,
EvolutionCardState,
EvolutionCosmetic,
EvolutionTierBoost,
EvolutionTrack,
MlbPlayer,
Pack,
PackType,
Player,
BattingSeasonStats,
PitchingSeasonStats,
ProcessedGame,
Rarity,
Roster,
RosterSlot,
ScoutClaim,
ScoutOpportunity,
StratGame,
StratPlay,
Decision,
Team,
Card,
Event,
)
# ---------------------------------------------------------------------------
# Shared-memory SQLite database for WP-13 tests.
# A named shared-memory URI allows multiple connections (and therefore
# multiple threads) to share the same in-memory database, which is required
# because TestClient routes run in a different thread than pytest fixtures.
# ---------------------------------------------------------------------------
_wp13_db = SqliteDatabase(
"file:wp13test?mode=memory&cache=shared",
uri=True,
pragmas={"foreign_keys": 1},
)
_WP13_MODELS = [
Rarity,
Event,
Cardset,
MlbPlayer,
Player,
Team,
PackType,
Pack,
Card,
Roster,
RosterSlot,
StratGame,
StratPlay,
Decision,
ScoutOpportunity,
ScoutClaim,
BattingSeasonStats,
PitchingSeasonStats,
ProcessedGame,
EvolutionTrack,
EvolutionCardState,
EvolutionTierBoost,
EvolutionCosmetic,
]
# Patch the service-layer 'db' reference to use our shared test database so
# that db.atomic() in update_season_stats() operates on the same connection.
_season_stats_module.db = _wp13_db
# ---------------------------------------------------------------------------
# Auth header used by every authenticated request
# ---------------------------------------------------------------------------
AUTH_HEADER = {"Authorization": "Bearer test-token"}
# ---------------------------------------------------------------------------
# Database fixture — binds all models to _wp13_db and creates/drops tables
# ---------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def setup_wp13_db():
"""Bind WP-13 models to the shared-memory SQLite db and create tables.
autouse=True so every test in this module automatically gets a fresh
schema. Tables are dropped in reverse dependency order after each test.
This fixture replaces (and disables) the conftest autouse setup_test_db
for tests in this module because we need a different database backend
(shared-cache URI rather than :memory:) to support multi-thread access
via TestClient.
"""
_wp13_db.bind(_WP13_MODELS)
_wp13_db.connect(reuse_if_open=True)
_wp13_db.create_tables(_WP13_MODELS)
yield _wp13_db
_wp13_db.drop_tables(list(reversed(_WP13_MODELS)), safe=True)
# ---------------------------------------------------------------------------
# Slim test app — only mounts the two routers under test.
# A db_middleware ensures the shared-cache connection is open for each request.
# ---------------------------------------------------------------------------
def _build_test_app() -> FastAPI:
"""Build a minimal FastAPI instance with just the WP-13 routers.
A db_middleware calls _wp13_db.connect(reuse_if_open=True) before each
request so that the route handler thread can use the shared-memory SQLite
connection even though it runs in a different thread from the fixture.
"""
from app.routers_v2.season_stats import router as ss_router
from app.routers_v2.evolution import router as evo_router
test_app = FastAPI()
@test_app.middleware("http")
async def db_middleware(request: Request, call_next):
_wp13_db.connect(reuse_if_open=True)
return await call_next(request)
test_app.include_router(ss_router)
test_app.include_router(evo_router)
return test_app
# ---------------------------------------------------------------------------
# TestClient fixture — function-scoped so it uses the per-test db binding.
# ---------------------------------------------------------------------------
@pytest.fixture
def client(setup_wp13_db):
"""FastAPI TestClient backed by the slim test app and shared-memory SQLite."""
with TestClient(_build_test_app()) as c:
yield c
# ---------------------------------------------------------------------------
# Shared helper factories (mirrors test_season_stats_update.py style)
# ---------------------------------------------------------------------------
def _make_cardset():
cs, _ = Cardset.get_or_create(
name="WP13 Test Set",
defaults={"description": "wp13 cardset", "total_cards": 100},
)
return cs
def _make_rarity():
r, _ = Rarity.get_or_create(value=1, name="Common", defaults={"color": "#ffffff"})
return r
def _make_player(name: str, pos: str = "1B") -> Player:
return Player.create(
p_name=name,
rarity=_make_rarity(),
cardset=_make_cardset(),
set_num=1,
pos_1=pos,
image="https://example.com/img.png",
mlbclub="TST",
franchise="TST",
description=f"wp13 test: {name}",
)
def _make_team(abbrev: str, gmid: int) -> Team:
return Team.create(
abbrev=abbrev,
sname=abbrev,
lname=f"Team {abbrev}",
gmid=gmid,
gmname=f"gm_{abbrev.lower()}",
gsheet="https://docs.google.com/spreadsheets/wp13",
wallet=500,
team_value=1000,
collection_value=1000,
season=11,
is_ai=False,
)
def _make_game(team_a, team_b) -> StratGame:
return StratGame.create(
season=11,
game_type="ranked",
away_team=team_a,
home_team=team_b,
)
def _make_play(game, play_num, batter, batter_team, pitcher, pitcher_team, **stats):
"""Create a StratPlay with sensible zero-defaults for all stat columns."""
defaults = dict(
on_base_code="000",
inning_half="top",
inning_num=1,
batting_order=1,
starting_outs=0,
away_score=0,
home_score=0,
pa=0,
ab=0,
hit=0,
run=0,
double=0,
triple=0,
homerun=0,
bb=0,
so=0,
hbp=0,
rbi=0,
sb=0,
cs=0,
outs=0,
sac=0,
ibb=0,
gidp=0,
bphr=0,
bpfo=0,
bp1b=0,
bplo=0,
)
defaults.update(stats)
return StratPlay.create(
game=game,
play_num=play_num,
batter=batter,
batter_team=batter_team,
pitcher=pitcher,
pitcher_team=pitcher_team,
**defaults,
)
def _make_track(
name: str = "WP13 Batter Track", card_type: str = "batter"
) -> EvolutionTrack:
track, _ = EvolutionTrack.get_or_create(
name=name,
defaults=dict(
card_type=card_type,
formula="pa + tb * 2",
t1_threshold=37,
t2_threshold=149,
t3_threshold=448,
t4_threshold=896,
),
)
return track
def _make_state(
player, team, track, current_tier=0, current_value=0.0
) -> EvolutionCardState:
return EvolutionCardState.create(
player=player,
team=team,
track=track,
current_tier=current_tier,
current_value=current_value,
fully_evolved=False,
last_evaluated_at=None,
)
# ---------------------------------------------------------------------------
# Tests: POST /api/v2/season-stats/update-game/{game_id}
# ---------------------------------------------------------------------------
def test_update_game_creates_season_stats_rows(client):
"""POST update-game creates player_season_stats rows for players in the game.
What: Set up a batter and pitcher in a game with 3 PA for the batter.
After the endpoint call, assert a BattingSeasonStats row exists with pa=3.
Why: This is the core write path. If the row is not created, the
evolution evaluator will always see zero career stats.
"""
team_a = _make_team("WU1", gmid=20001)
team_b = _make_team("WU2", gmid=20002)
batter = _make_player("WP13 Batter A")
pitcher = _make_player("WP13 Pitcher A", pos="SP")
game = _make_game(team_a, team_b)
for i in range(3):
_make_play(game, i + 1, batter, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
resp = client.post(
f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
stats = BattingSeasonStats.get_or_none(
(BattingSeasonStats.player == batter)
& (BattingSeasonStats.team == team_a)
& (BattingSeasonStats.season == 11)
)
assert stats is not None
assert stats.pa == 3
def test_update_game_response_shape(client):
"""POST update-game returns {"updated": N, "skipped": false}.
What: A game with one batter and one pitcher produces updated >= 1 and
skipped is false on the first call.
Why: The bot relies on 'updated' to log how many rows were touched and
'skipped' to detect re-delivery.
"""
team_a = _make_team("WS1", gmid=20011)
team_b = _make_team("WS2", gmid=20012)
batter = _make_player("WP13 Batter S")
pitcher = _make_player("WP13 Pitcher S", pos="SP")
game = _make_game(team_a, team_b)
_make_play(game, 1, batter, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
resp = client.post(
f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
data = resp.json()
assert "updated" in data
assert data["updated"] >= 1
assert data["skipped"] is False
def test_update_game_idempotent(client):
"""Calling update-game twice for the same game returns skipped=true on second call.
What: Process a game once (pa=3), then call the endpoint again with the
same game_id. The second response must have skipped=true and updated=0,
and pa in the DB must still be 3 (not 6).
Why: The bot infrastructure may deliver game-complete events more than
once. Double-counting would corrupt all evolution stats downstream.
"""
team_a = _make_team("WI1", gmid=20021)
team_b = _make_team("WI2", gmid=20022)
batter = _make_player("WP13 Batter I")
pitcher = _make_player("WP13 Pitcher I", pos="SP")
game = _make_game(team_a, team_b)
for i in range(3):
_make_play(game, i + 1, batter, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
resp1 = client.post(
f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER
)
assert resp1.status_code == 200
assert resp1.json()["skipped"] is False
resp2 = client.post(
f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER
)
assert resp2.status_code == 200
data2 = resp2.json()
assert data2["skipped"] is True
assert data2["updated"] == 0
stats = BattingSeasonStats.get(
(BattingSeasonStats.player == batter) & (BattingSeasonStats.team == team_a)
)
assert stats.pa == 3 # not 6
# ---------------------------------------------------------------------------
# Tests: POST /api/v2/evolution/evaluate-game/{game_id}
# ---------------------------------------------------------------------------
def test_evaluate_game_increases_current_value(client):
"""After update-game, evaluate-game raises the card's current_value above 0.
What: Batter with an EvolutionCardState gets 3 hits (pa=3, hit=3) from a
game. update-game writes those stats; evaluate-game then recomputes the
value. current_value in the DB must be > 0 after the evaluate call.
Why: This is the end-to-end path: stats in -> evaluate -> value updated.
If current_value stays 0, the card will never advance regardless of how
many games are played.
"""
team_a = _make_team("WE1", gmid=20031)
team_b = _make_team("WE2", gmid=20032)
batter = _make_player("WP13 Batter E")
pitcher = _make_player("WP13 Pitcher E", pos="SP")
game = _make_game(team_a, team_b)
track = _make_track()
_make_state(batter, team_a, track)
for i in range(3):
_make_play(
game, i + 1, batter, team_a, pitcher, team_b, pa=1, ab=1, hit=1, outs=0
)
client.post(f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER)
resp = client.post(
f"/api/v2/evolution/evaluate-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
state = EvolutionCardState.get(
(EvolutionCardState.player == batter) & (EvolutionCardState.team == team_a)
)
assert state.current_value > 0
def test_evaluate_game_tier_advancement(client):
"""A game that pushes a card past a tier threshold advances the tier.
What: Set the batter's career value just below T1 (37) by manually seeding
a prior BattingSeasonStats row with pa=34. Then add a game that brings the
total past 37 and call evaluate-game. current_tier must advance to >= 1.
Why: Tier advancement is the core deliverable of card evolution. If the
threshold comparison is off-by-one or the tier is never written, the card
will never visually evolve.
"""
team_a = _make_team("WT1", gmid=20041)
team_b = _make_team("WT2", gmid=20042)
batter = _make_player("WP13 Batter T")
pitcher = _make_player("WP13 Pitcher T", pos="SP")
game = _make_game(team_a, team_b)
track = _make_track(name="WP13 Tier Adv Track")
_make_state(batter, team_a, track, current_tier=0, current_value=34.0)
# Seed prior stats: 34 PA (value = 34; T1 threshold = 37)
BattingSeasonStats.create(
player=batter,
team=team_a,
season=10, # previous season
pa=34,
)
# Game adds 4 more PA (total pa=38 > T1=37)
for i in range(4):
_make_play(game, i + 1, batter, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
client.post(f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER)
resp = client.post(
f"/api/v2/evolution/evaluate-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
updated_state = EvolutionCardState.get(
(EvolutionCardState.player == batter) & (EvolutionCardState.team == team_a)
)
assert updated_state.current_tier >= 1
def test_evaluate_game_no_tier_advancement(client):
"""A game with insufficient stats does not advance the tier.
What: A batter starts at tier=0 with current_value=0. The game adds only
2 PA (value=2 which is < T1 threshold of 37). After evaluate-game the
tier must still be 0.
Why: We need to confirm the threshold guard works correctly: cards should
not advance prematurely before earning the required stats.
"""
team_a = _make_team("WN1", gmid=20051)
team_b = _make_team("WN2", gmid=20052)
batter = _make_player("WP13 Batter N")
pitcher = _make_player("WP13 Pitcher N", pos="SP")
game = _make_game(team_a, team_b)
track = _make_track(name="WP13 No-Adv Track")
_make_state(batter, team_a, track, current_tier=0)
# Only 2 PA — far below T1=37
for i in range(2):
_make_play(game, i + 1, batter, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
client.post(f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER)
resp = client.post(
f"/api/v2/evolution/evaluate-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
data = resp.json()
assert data["tier_ups"] == []
state = EvolutionCardState.get(
(EvolutionCardState.player == batter) & (EvolutionCardState.team == team_a)
)
assert state.current_tier == 0
def test_evaluate_game_tier_ups_in_response(client):
"""evaluate-game response includes a tier_ups entry when a player advances.
What: Seed a batter at tier=0 with pa=34 (just below T1=37). A game adds
4 PA pushing total to 38. The response tier_ups list must contain one
entry with the correct fields: player_id, team_id, player_name, old_tier,
new_tier, current_value, track_name.
Why: The bot uses tier_ups to trigger in-game notifications and visual card
upgrade animations. A missing or malformed entry would silently skip the
announcement.
"""
team_a = _make_team("WR1", gmid=20061)
team_b = _make_team("WR2", gmid=20062)
batter = _make_player("WP13 Batter R")
pitcher = _make_player("WP13 Pitcher R", pos="SP")
game = _make_game(team_a, team_b)
track = _make_track(name="WP13 Tier-Ups Track")
_make_state(batter, team_a, track, current_tier=0)
# Seed prior stats below threshold
BattingSeasonStats.create(player=batter, team=team_a, season=10, pa=34)
# Game pushes past T1
for i in range(4):
_make_play(game, i + 1, batter, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
client.post(f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER)
resp = client.post(
f"/api/v2/evolution/evaluate-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
data = resp.json()
assert data["evaluated"] >= 1
assert len(data["tier_ups"]) == 1
tu = data["tier_ups"][0]
assert tu["player_id"] == batter.player_id
assert tu["team_id"] == team_a.id
assert tu["player_name"] == "WP13 Batter R"
assert tu["old_tier"] == 0
assert tu["new_tier"] >= 1
assert tu["current_value"] > 0
assert tu["track_name"] == "WP13 Tier-Ups Track"
def test_evaluate_game_skips_players_without_state(client):
"""Players in a game without an EvolutionCardState are silently skipped.
What: A game has two players: one with a card state and one without.
After evaluate-game, evaluated should be 1 (only the player with state)
and the endpoint must return 200 without errors.
Why: Not every player on a roster will have started their evolution journey.
A hard 404 or 500 for missing states would break the entire batch.
"""
team_a = _make_team("WK1", gmid=20071)
team_b = _make_team("WK2", gmid=20072)
batter_with_state = _make_player("WP13 Batter WithState")
batter_no_state = _make_player("WP13 Batter NoState")
pitcher = _make_player("WP13 Pitcher K", pos="SP")
game = _make_game(team_a, team_b)
track = _make_track(name="WP13 Skip Track")
# Only batter_with_state gets an EvolutionCardState
_make_state(batter_with_state, team_a, track)
_make_play(game, 1, batter_with_state, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
_make_play(game, 2, batter_no_state, team_a, pitcher, team_b, pa=1, ab=1, outs=1)
client.post(f"/api/v2/season-stats/update-game/{game.id}", headers=AUTH_HEADER)
resp = client.post(
f"/api/v2/evolution/evaluate-game/{game.id}", headers=AUTH_HEADER
)
assert resp.status_code == 200
data = resp.json()
# Only 1 evaluation (the player with a state)
assert data["evaluated"] == 1
# ---------------------------------------------------------------------------
# Tests: Auth required on both endpoints
# ---------------------------------------------------------------------------
def test_auth_required_update_game(client):
"""Missing bearer token on update-game returns 401.
What: POST to update-game without any Authorization header.
Why: Both endpoints are production-only callbacks that should never be
accessible without a valid bearer token.
"""
team_a = _make_team("WA1", gmid=20081)
team_b = _make_team("WA2", gmid=20082)
game = _make_game(team_a, team_b)
resp = client.post(f"/api/v2/season-stats/update-game/{game.id}")
assert resp.status_code == 401
def test_auth_required_evaluate_game(client):
"""Missing bearer token on evaluate-game returns 401.
What: POST to evaluate-game without any Authorization header.
Why: Same security requirement as update-game: callbacks must be
authenticated to prevent replay attacks and unauthorized stat manipulation.
"""
team_a = _make_team("WB1", gmid=20091)
team_b = _make_team("WB2", gmid=20092)
game = _make_game(team_a, team_b)
resp = client.post(f"/api/v2/evolution/evaluate-game/{game.id}")
assert resp.status_code == 401
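The per-connection :memory: behaviour described in the module docstring can be demonstrated with nothing but the stdlib sqlite3 module (the database name 'demo' is arbitrary):

```python
import sqlite3

# Plain :memory: databases are per-connection: a second connection
# sees a brand-new, empty database.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE t (x)")
b = sqlite3.connect(":memory:")
tables_b = b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables_b)  # []: 'b' cannot see the table created on 'a'

# A named shared-memory URI lets independent connections (and therefore
# threads) share one in-memory database, which is what the WP-13
# fixture needs for TestClient's handler thread.
uri = "file:demo?mode=memory&cache=shared"
c = sqlite3.connect(uri, uri=True)
c.execute("CREATE TABLE t (x)")
d = sqlite3.connect(uri, uri=True)
tables_d = d.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables_d)  # [('t',)]
```

One caveat of the shared-memory form: the database lives only as long as at least one connection stays open, which is why the fixture keeps _wp13_db connected across the test.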

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,804 @@
"""Tests for the refractor evaluator service (WP-08).
Unit tests verify tier assignment, advancement, partial progress, idempotency,
full refractor tier, and no-regression behaviour without touching any external database,
using stub Peewee models bound to an in-memory SQLite database.
The formula engine (WP-09) and Peewee models (WP-05/WP-07) are not imported
from db_engine/formula_engine; instead the tests supply minimal stubs and
inject them via the _stats_model, _state_model, _compute_value_fn, and
_tier_from_value_fn overrides on evaluate_card().
Stub track thresholds (batter):
T1: 37 T2: 149 T3: 448 T4: 896
Useful reference values:
value=30  → T0 (below T1=37)
value=50  → T1 (37 <= 50 < 149)
value=100 → T1 (stays T1; T2 threshold is 149)
value=160 → T2 (149 <= 160 < 448)
value=900 → T4 (>= 896), fully_evolved
"""
import pytest
from datetime import datetime
from peewee import (
BooleanField,
CharField,
DateTimeField,
FloatField,
ForeignKeyField,
IntegerField,
Model,
SqliteDatabase,
)
from app.services.refractor_evaluator import evaluate_card
# ---------------------------------------------------------------------------
# Stub models — mirror WP-01/WP-04/WP-07 schema without importing db_engine
# ---------------------------------------------------------------------------
_test_db = SqliteDatabase(":memory:")
class TrackStub(Model):
"""Minimal RefractorTrack stub for evaluator tests."""
card_type = CharField(unique=True)
t1_threshold = IntegerField()
t2_threshold = IntegerField()
t3_threshold = IntegerField()
t4_threshold = IntegerField()
class Meta:
database = _test_db
table_name = "refractor_track"
class CardStateStub(Model):
"""Minimal RefractorCardState stub for evaluator tests."""
player_id = IntegerField()
team_id = IntegerField()
track = ForeignKeyField(TrackStub)
current_tier = IntegerField(default=0)
current_value = FloatField(default=0.0)
fully_evolved = BooleanField(default=False)
last_evaluated_at = DateTimeField(null=True)
class Meta:
database = _test_db
table_name = "refractor_card_state"
indexes = ((("player_id", "team_id"), True),)
class StatsStub(Model):
"""Minimal PlayerSeasonStats stub for evaluator tests."""
player_id = IntegerField()
team_id = IntegerField()
season = IntegerField()
pa = IntegerField(default=0)
hits = IntegerField(default=0)
doubles = IntegerField(default=0)
triples = IntegerField(default=0)
hr = IntegerField(default=0)
outs = IntegerField(default=0)
strikeouts = IntegerField(default=0)
class Meta:
database = _test_db
table_name = "player_season_stats"
# ---------------------------------------------------------------------------
# Formula stubs — avoid importing app.services.formula_engine before WP-09
# ---------------------------------------------------------------------------
def _compute_value(card_type: str, stats) -> float:
"""Stub compute_value_for_track: returns pa + tb*2 for batters, outs/3 + k for pitchers."""
if card_type == "batter":
singles = stats.hits - stats.doubles - stats.triples - stats.hr
tb = singles + 2 * stats.doubles + 3 * stats.triples + 4 * stats.hr
return float(stats.pa + tb * 2)
return stats.outs / 3 + stats.strikeouts
def _tier_from_value(value: float, track) -> int:
"""Stub tier_from_value using TrackStub fields t1_threshold/t2_threshold/etc."""
if isinstance(track, dict):
t1, t2, t3, t4 = (
track["t1_threshold"],
track["t2_threshold"],
track["t3_threshold"],
track["t4_threshold"],
)
else:
t1, t2, t3, t4 = (
track.t1_threshold,
track.t2_threshold,
track.t3_threshold,
track.t4_threshold,
)
if value >= t4:
return 4
if value >= t3:
return 3
if value >= t2:
return 2
if value >= t1:
return 1
return 0
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def _db():
"""Create tables before each test and drop them afterwards."""
_test_db.connect(reuse_if_open=True)
_test_db.create_tables([TrackStub, CardStateStub, StatsStub])
yield
_test_db.drop_tables([StatsStub, CardStateStub, TrackStub])
@pytest.fixture()
def batter_track():
return TrackStub.create(
card_type="batter",
t1_threshold=37,
t2_threshold=149,
t3_threshold=448,
t4_threshold=896,
)
@pytest.fixture()
def sp_track():
return TrackStub.create(
card_type="sp",
t1_threshold=10,
t2_threshold=40,
t3_threshold=120,
t4_threshold=240,
)
def _make_state(player_id, team_id, track, current_tier=0, current_value=0.0):
return CardStateStub.create(
player_id=player_id,
team_id=team_id,
track=track,
current_tier=current_tier,
current_value=current_value,
fully_evolved=False,
last_evaluated_at=None,
)
def _make_stats(player_id, team_id, season, **kwargs):
return StatsStub.create(
player_id=player_id, team_id=team_id, season=season, **kwargs
)
def _eval(player_id, team_id, dry_run: bool = False):
return evaluate_card(
player_id,
team_id,
dry_run=dry_run,
_stats_model=StatsStub,
_state_model=CardStateStub,
_compute_value_fn=_compute_value,
_tier_from_value_fn=_tier_from_value,
)
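# _eval(1, 1) is shorthand for a fully injected evaluate_card call; a typical
# result dict (illustrative values — the key set is asserted in
# TestReturnShape below) looks like:
#   {"player_id": 1, "team_id": 1, "current_tier": 2, "computed_tier": 2,
#    "computed_fully_evolved": False, "current_value": 160.0,
#    "fully_evolved": False, "last_evaluated_at": "2026-04-04T12:00:00"}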
# ---------------------------------------------------------------------------
# Unit tests
# ---------------------------------------------------------------------------
class TestTierAssignment:
"""Tier assigned from computed value against track thresholds."""
def test_value_below_t1_stays_t0(self, batter_track):
"""value=30 is below T1 threshold (37) → tier stays 0."""
_make_state(1, 1, batter_track)
# pa=30, no extra hits → value = 30 + 0 = 30 < 37
_make_stats(1, 1, 1, pa=30)
result = _eval(1, 1)
assert result["current_tier"] == 0
def test_value_at_t1_threshold_assigns_tier_1(self, batter_track):
"""value=50 → T1 (37 <= 50 < 149)."""
_make_state(1, 1, batter_track)
# pa=50, no hits → value = 50 + 0 = 50
_make_stats(1, 1, 1, pa=50)
result = _eval(1, 1)
assert result["current_tier"] == 1
def test_tier_advancement_to_t2(self, batter_track):
"""value=160 → T2 (149 <= 160 < 448)."""
_make_state(1, 1, batter_track)
# pa=160, no hits → value = 160
_make_stats(1, 1, 1, pa=160)
result = _eval(1, 1)
assert result["current_tier"] == 2
def test_partial_progress_stays_t1(self, batter_track):
"""value=100 with T2=149 → stays T1, does not advance to T2."""
_make_state(1, 1, batter_track)
# pa=100 → value = 100, T2 threshold = 149 → tier 1
_make_stats(1, 1, 1, pa=100)
result = _eval(1, 1)
assert result["current_tier"] == 1
assert result["fully_evolved"] is False
def test_fully_evolved_at_t4(self, batter_track):
"""value >= T4 (896) → tier=4 and fully_evolved=True."""
_make_state(1, 1, batter_track)
# pa=900 → value = 900 >= 896
_make_stats(1, 1, 1, pa=900)
result = _eval(1, 1)
assert result["current_tier"] == 4
assert result["fully_evolved"] is True
class TestNoRegression:
"""current_tier never decreases."""
def test_tier_never_decreases(self, batter_track):
"""If current_tier=2 and new value only warrants T1, tier stays 2."""
# Seed state at tier 2
_make_state(1, 1, batter_track, current_tier=2, current_value=160.0)
# Sparse stats: value=50 → would be T1, but current is T2
_make_stats(1, 1, 1, pa=50)
result = _eval(1, 1)
assert result["current_tier"] == 2 # no regression
def test_tier_advances_when_value_improves(self, batter_track):
"""If current_tier=1 and new value warrants T3, tier advances to 3."""
_make_state(1, 1, batter_track, current_tier=1, current_value=50.0)
# pa=500 → value = 500 >= 448 → T3
_make_stats(1, 1, 1, pa=500)
result = _eval(1, 1)
assert result["current_tier"] == 3
class TestIdempotency:
"""Calling evaluate_card twice with same stats returns the same result."""
def test_idempotent_same_result(self, batter_track):
"""Two evaluations with identical stats produce the same tier and value."""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=160)
result1 = _eval(1, 1)
result2 = _eval(1, 1)
assert result1["current_tier"] == result2["current_tier"]
assert result1["current_value"] == result2["current_value"]
assert result1["fully_evolved"] == result2["fully_evolved"]
def test_idempotent_at_fully_evolved(self, batter_track):
"""Repeated evaluation at T4 remains fully_evolved=True."""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=900)
_eval(1, 1)
result = _eval(1, 1)
assert result["current_tier"] == 4
assert result["fully_evolved"] is True
class TestCareerTotals:
"""Stats are summed across all seasons for the player/team pair."""
def test_multi_season_stats_summed(self, batter_track):
"""Stats from two seasons are aggregated into a single career total."""
_make_state(1, 1, batter_track)
# Season 1: pa=80, Season 2: pa=90 → total pa=170 → value=170 → T2
_make_stats(1, 1, 1, pa=80)
_make_stats(1, 1, 2, pa=90)
result = _eval(1, 1)
assert result["current_tier"] == 2
assert result["current_value"] == 170.0
def test_zero_stats_stays_t0(self, batter_track):
"""No stats rows → all zeros → value=0 → tier=0."""
_make_state(1, 1, batter_track)
result = _eval(1, 1)
assert result["current_tier"] == 0
assert result["current_value"] == 0.0
def test_other_team_stats_not_included(self, batter_track):
"""Stats for the same player on a different team are not counted."""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=50)
# Same player, different team — should not count
_make_stats(1, 2, 1, pa=200)
result = _eval(1, 1)
# Only pa=50 counted → value=50 → T1
assert result["current_tier"] == 1
assert result["current_value"] == 50.0
class TestFullyEvolvedPersistence:
"""T2-1: fully_evolved=True is preserved even when stats drop or are absent."""
def test_fully_evolved_persists_when_stats_zeroed(self, batter_track):
"""Card at T4/fully_evolved=True stays fully_evolved after stats are removed.
What: Set up a RefractorCardState at tier=4 with fully_evolved=True.
Then call evaluate_card with no season stats rows (zero career totals).
The evaluator computes value=0 -> new_tier=0, but current_tier must
stay at 4 (no regression) and fully_evolved must remain True.
Why: fully_evolved is a permanent achievement flag; it must not be
revoked if a team's stats are rolled back, corrected, or simply not
yet imported. The no-regression rule (max(current, new)) prevents
tier demotion; this test confirms that fully_evolved follows the same
protection.
"""
# Seed state at T4 fully_evolved
_make_state(1, 1, batter_track, current_tier=4, current_value=900.0)
# No stats rows — career totals will be all zeros
# (no _make_stats call)
result = _eval(1, 1)
# The no-regression rule keeps tier at 4
assert result["current_tier"] == 4, (
f"Expected tier=4 (no regression), got {result['current_tier']}"
)
# fully_evolved must still be True since tier >= 4
assert result["fully_evolved"] is True, (
"fully_evolved was reset to False after re-evaluation with zero stats"
)
def test_fully_evolved_persists_with_partial_stats(self, batter_track):
"""Card at T4 stays fully_evolved even with stats below T1.
What: Same setup as above but with a season stats row giving value=30
(below T1=37). The computed tier would be 0, but current_tier must
not regress from 4.
Why: Validates that no-regression applies regardless of whether stats
are zero or merely insufficient for the achieved tier.
"""
_make_state(1, 1, batter_track, current_tier=4, current_value=900.0)
# pa=30 -> value=30, which is below T1=37 -> computed tier=0
_make_stats(1, 1, 1, pa=30)
result = _eval(1, 1)
assert result["current_tier"] == 4
assert result["fully_evolved"] is True
class TestMissingState:
"""ValueError when no card state exists for (player_id, team_id)."""
def test_missing_state_raises(self, batter_track):
"""evaluate_card raises ValueError when no state row exists."""
# No card state created
with pytest.raises(ValueError, match="No refractor_card_state"):
_eval(99, 99)
class TestReturnShape:
"""Return dict has the expected keys and types."""
def test_return_keys(self, batter_track):
"""Result dict contains all expected keys.
Phase 2 addition: 'computed_tier' is included alongside 'current_tier'
so that evaluate-game can detect tier-ups without writing the tier
(dry_run=True path). Both keys must always be present.
"""
_make_state(1, 1, batter_track)
result = _eval(1, 1)
assert set(result.keys()) == {
"player_id",
"team_id",
"current_tier",
"computed_tier",
"computed_fully_evolved",
"current_value",
"fully_evolved",
"last_evaluated_at",
}
def test_last_evaluated_at_is_iso_string(self, batter_track):
"""last_evaluated_at is a non-empty ISO-8601 string."""
_make_state(1, 1, batter_track)
result = _eval(1, 1)
ts = result["last_evaluated_at"]
assert isinstance(ts, str) and len(ts) > 0
# Must be parseable as a datetime
datetime.fromisoformat(ts)
class TestFullyEvolvedFlagCorrection:
"""T3-7: fully_evolved/tier mismatch is corrected by evaluate_card.
A database corruption where fully_evolved=True but current_tier < 4 can
occur if the flag was set incorrectly by a migration or external script.
evaluate_card must re-derive fully_evolved from the freshly-computed tier
(after the no-regression max() is applied), not trust the stored flag.
"""
def test_fully_evolved_flag_corrected_when_tier_below_4(self, batter_track):
"""fully_evolved=True with current_tier=3 is corrected to False after evaluation.
What: Manually set database state to fully_evolved=True, current_tier=3
(a corruption scenario: tier 3 cannot be "fully evolved" since T4 is
the maximum tier). Provide stats that compute to a value in the T3
range (value=500, which is >= T3=448 but < T4=896).
After evaluate_card:
- computed value = 500 → new_tier = 3
- no-regression: max(current_tier=3, new_tier=3) = 3 → tier stays 3
- fully_evolved = (3 >= 4) = False → flag is corrected
Why: The evaluator always recomputes fully_evolved from the final
current_tier rather than preserving the stored flag. This ensures
that a corrupted fully_evolved=True at tier<4 is silently repaired
on the next evaluation without requiring a separate migration.
"""
# Inject corruption: fully_evolved=True but tier=3
state = CardStateStub.create(
player_id=1,
team_id=1,
track=batter_track,
current_tier=3,
current_value=500.0,
fully_evolved=True, # intentionally wrong
last_evaluated_at=None,
)
# Stats that compute to value=500: pa=500, no hits → value=500+0=500
# T3 threshold=448, T4 threshold=896 → tier=3, NOT 4
_make_stats(1, 1, 1, pa=500)
result = _eval(1, 1)
assert result["current_tier"] == 3, (
f"Expected tier=3 after evaluation with value=500, got {result['current_tier']}"
)
assert result["fully_evolved"] is False, (
"fully_evolved should have been corrected to False for tier=3, "
f"got {result['fully_evolved']}"
)
# Confirm the database row was updated (not just the return dict)
state_reloaded = CardStateStub.get_by_id(state.id)
assert state_reloaded.fully_evolved is False, (
"fully_evolved was not persisted as False after correction"
)
def test_fully_evolved_flag_preserved_when_tier_reaches_4(self, batter_track):
"""fully_evolved=True with current_tier=3 stays True when new stats push to T4.
What: Same corruption setup as above (fully_evolved=True, tier=3),
but now provide stats with value=900 (>= T4=896).
After evaluate_card:
- computed value = 900 → new_tier = 4
- no-regression: max(current_tier=3, new_tier=4) = 4 → advances to 4
- fully_evolved = (4 >= 4) = True → flag stays True (correctly)
Why: Confirms the evaluator correctly sets fully_evolved=True when
the re-computed tier legitimately reaches T4, regardless of whether
the stored flag was already True before evaluation.
"""
CardStateStub.create(
player_id=1,
team_id=1,
track=batter_track,
current_tier=3,
current_value=500.0,
fully_evolved=True, # stored flag (will be re-derived)
last_evaluated_at=None,
)
# pa=900 → value=900 >= T4=896 → new_tier=4
_make_stats(1, 1, 1, pa=900)
result = _eval(1, 1)
assert result["current_tier"] == 4, (
f"Expected tier=4 for value=900, got {result['current_tier']}"
)
assert result["fully_evolved"] is True, (
f"Expected fully_evolved=True for tier=4, got {result['fully_evolved']}"
)
class TestMultiTeamStatIsolation:
"""T3-8: A player's refractor value is isolated to a specific team's stats.
The evaluator queries BattingSeasonStats WHERE player_id=? AND team_id=?.
When a player has stats on two different teams in the same season, each
team's RefractorCardState must reflect only that team's stats, not a
combined total.
"""
def test_multi_team_same_season_stats_isolated(self, batter_track):
"""Each team's refractor value reflects only that team's stats, not combined.
What: Create one player with BattingSeasonStats on team_id=1 (pa=80)
and team_id=2 (pa=120) in the same season. Create a RefractorCardState
for each team. Evaluate each team's card separately and verify:
- Team 1 state: value = 80 → tier = T1 (80 >= T1=37, < T2=149)
- Team 2 state: value = 120 → tier = T1 (120 >= T1=37, < T2=149)
- Neither value equals the combined total (80+120=200 would be T2)
Why: Confirms the `WHERE player_id=? AND team_id=?` filter in the
evaluator is correctly applied. Without proper team isolation, the
combined total of 200 would cross the T2 threshold (149) and both
states would be incorrectly assigned to T2. This is a critical
correctness requirement: a player traded between teams should have
separate refractor progressions for their time with each franchise.
"""
# Stats on team 1: pa=80 → value=80 (T1: 37<=80<149)
_make_stats(player_id=1, team_id=1, season=11, pa=80)
# Stats on team 2: pa=120 → value=120 (T1: 37<=120<149)
_make_stats(player_id=1, team_id=2, season=11, pa=120)
# combined pa would be 200 → value=200 → T2 (149<=200<448)
# Each team must see only its own stats, not 200
_make_state(player_id=1, team_id=1, track=batter_track)
_make_state(player_id=1, team_id=2, track=batter_track)
result_team1 = _eval(player_id=1, team_id=1)
result_team2 = _eval(player_id=1, team_id=2)
# Team 1: only pa=80 counted → value=80 → T1
assert result_team1["current_value"] == 80.0, (
f"Team 1 value should be 80.0 (its own stats only), "
f"got {result_team1['current_value']}"
)
assert result_team1["current_tier"] == 1, (
f"Team 1 tier should be T1 for value=80, got {result_team1['current_tier']}"
)
# Team 2: only pa=120 counted → value=120 → T1
assert result_team2["current_value"] == 120.0, (
f"Team 2 value should be 120.0 (its own stats only), "
f"got {result_team2['current_value']}"
)
assert result_team2["current_tier"] == 1, (
f"Team 2 tier should be T1 for value=120, got {result_team2['current_tier']}"
)
# Sanity: neither team crossed T2 (which would happen if stats were combined)
assert (
result_team1["current_tier"] != 2 and result_team2["current_tier"] != 2
), (
"At least one team was incorrectly assigned T2 — stats may have been combined"
)
def test_multi_team_different_seasons_isolated(self, batter_track):
"""Stats for the same player across multiple seasons remain per-team isolated.
What: Same player with two seasons of stats for each of two teams:
- team_id=1: season 10 pa=90, season 11 pa=70 → combined=160
- team_id=2: season 10 pa=100, season 11 pa=80 → combined=180
After evaluation:
- Team 1: value=160 → T2 (149<=160<448)
- Team 2: value=180 → T2 (149<=180<448)
The test confirms that cross-team season aggregation does not bleed
stats from team 2 into team 1's calculation or vice versa.
Why: Multi-season aggregation and multi-team isolation must work
together. A bug that incorrectly sums all player stats regardless
of team would produce a combined value of 340 → T2, which coincidentally
passes the tier assertion, but the per-team values would be wrong.
This test uses values where cross-contamination would produce a
materially different value (340 vs 160/180), catching that class of bug.
"""
# Team 1 stats: total pa=160 → value=160 → T2
_make_stats(player_id=1, team_id=1, season=10, pa=90)
_make_stats(player_id=1, team_id=1, season=11, pa=70)
# Team 2 stats: total pa=180 → value=180 → T2
_make_stats(player_id=1, team_id=2, season=10, pa=100)
_make_stats(player_id=1, team_id=2, season=11, pa=80)
_make_state(player_id=1, team_id=1, track=batter_track)
_make_state(player_id=1, team_id=2, track=batter_track)
result_team1 = _eval(player_id=1, team_id=1)
result_team2 = _eval(player_id=1, team_id=2)
assert result_team1["current_value"] == 160.0, (
f"Team 1 multi-season value should be 160.0, got {result_team1['current_value']}"
)
assert result_team1["current_tier"] == 2, (
f"Team 1 tier should be T2 for value=160, got {result_team1['current_tier']}"
)
assert result_team2["current_value"] == 180.0, (
f"Team 2 multi-season value should be 180.0, got {result_team2['current_value']}"
)
assert result_team2["current_tier"] == 2, (
f"Team 2 tier should be T2 for value=180, got {result_team2['current_tier']}"
)
class TestDryRun:
"""dry_run=True writes current_value and last_evaluated_at but NOT current_tier
or fully_evolved, allowing apply_tier_boost() to write tier + variant atomically.
All tests use stats that would produce a tier-up (value=160 → T2) on a card
seeded at tier=0, so the delta between dry and non-dry behaviour is obvious.
Stub thresholds (batter): T1=37, T2=149, T3=448, T4=896.
value=160 → T2 (149 <= 160 < 448); starting current_tier=0 → tier-up to T2.
"""
def test_dry_run_does_not_write_current_tier(self, batter_track):
"""dry_run=True leaves current_tier unchanged in the database.
What: Seed a card at tier=0. Provide stats that would advance to T2
(value=160). Call evaluate_card with dry_run=True. Re-read the DB row
and assert current_tier is still 0.
Why: The dry_run path must not persist the tier so that apply_tier_boost()
can write tier + variant atomically on the next step. If current_tier
were written here, a boost failure would leave the tier advanced with no
corresponding variant, causing an inconsistent state.
"""
_make_state(1, 1, batter_track, current_tier=0)
_make_stats(1, 1, 1, pa=160)
_eval(1, 1, dry_run=True)
reloaded = CardStateStub.get(
(CardStateStub.player_id == 1) & (CardStateStub.team_id == 1)
)
assert reloaded.current_tier == 0, (
f"dry_run should not write current_tier; expected 0, got {reloaded.current_tier}"
)
def test_dry_run_does_not_write_fully_evolved(self, batter_track):
"""dry_run=True leaves fully_evolved=False unchanged in the database.
What: Seed a card at tier=0 with fully_evolved=False. Provide stats that
would push to T4 (value=900). Call evaluate_card with dry_run=True.
Re-read the DB row and assert fully_evolved is still False.
Why: fully_evolved follows current_tier and must be written atomically
by apply_tier_boost(). Writing it here would let the flag get out of
sync with the tier if the boost subsequently fails.
"""
_make_state(1, 1, batter_track, current_tier=0)
_make_stats(1, 1, 1, pa=900) # value=900 → T4 → fully_evolved=True normally
_eval(1, 1, dry_run=True)
reloaded = CardStateStub.get(
(CardStateStub.player_id == 1) & (CardStateStub.team_id == 1)
)
assert reloaded.fully_evolved is False, (
"dry_run should not write fully_evolved; expected False, "
f"got {reloaded.fully_evolved}"
)
def test_dry_run_writes_current_value(self, batter_track):
"""dry_run=True DOES update current_value in the database.
What: Seed a card with current_value=0. Provide stats giving value=160.
Call evaluate_card with dry_run=True. Re-read the DB row and assert
current_value has been updated to 160.0.
Why: current_value tracks formula progress and is safe to write
at any time: it does not affect game-logic atomicity, so it is
always persisted regardless of dry_run.
"""
_make_state(1, 1, batter_track, current_value=0.0)
_make_stats(1, 1, 1, pa=160)
_eval(1, 1, dry_run=True)
reloaded = CardStateStub.get(
(CardStateStub.player_id == 1) & (CardStateStub.team_id == 1)
)
assert reloaded.current_value == 160.0, (
f"dry_run should still write current_value; expected 160.0, "
f"got {reloaded.current_value}"
)
def test_dry_run_writes_last_evaluated_at(self, batter_track):
"""dry_run=True DOES update last_evaluated_at in the database.
What: Seed a card with last_evaluated_at=None. Call evaluate_card with
dry_run=True. Re-read the DB row and assert last_evaluated_at is now a
non-None datetime.
Why: last_evaluated_at is a bookkeeping field used for scheduling and
audit purposes. It is safe to update independently of tier writes
and should always reflect the most recent evaluation attempt.
"""
_make_state(1, 1, batter_track)
_make_stats(1, 1, 1, pa=160)
_eval(1, 1, dry_run=True)
reloaded = CardStateStub.get(
(CardStateStub.player_id == 1) & (CardStateStub.team_id == 1)
)
assert reloaded.last_evaluated_at is not None, (
"dry_run should still write last_evaluated_at; got None"
)
def test_dry_run_returns_computed_tier(self, batter_track):
"""dry_run=True return dict has computed_tier=T2 while current_tier stays 0.
What: Seed at tier=0. Stats value=160 → T2. Call dry_run=True.
Assert:
- result["computed_tier"] == 2 (what the formula says)
- result["current_tier"] == 0 (what is stored; unchanged)
Why: Callers use the divergence between computed_tier and current_tier
to detect a pending tier-up. Both keys must be present and correct for
the evaluate-game endpoint to gate apply_tier_boost() correctly.
"""
_make_state(1, 1, batter_track, current_tier=0)
_make_stats(1, 1, 1, pa=160)
result = _eval(1, 1, dry_run=True)
assert result["computed_tier"] == 2, (
f"computed_tier should reflect formula result T2; got {result['computed_tier']}"
)
assert result["current_tier"] == 0, (
f"current_tier should reflect unchanged DB value 0; got {result['current_tier']}"
)
def test_dry_run_returns_computed_fully_evolved(self, batter_track):
"""dry_run=True sets computed_fully_evolved correctly in the return dict.
What: Two sub-cases:
- Stats value=160 → T2: computed_fully_evolved should be False.
- Stats value=900 → T4: computed_fully_evolved should be True.
In both cases fully_evolved in the DB remains False (tier not written).
Why: computed_fully_evolved lets callers know whether the pending tier-up
will result in a fully-evolved card without having to re-query the DB
or recalculate the tier themselves. It must match (computed_tier >= 4),
not the stored fully_evolved value.
"""
# Sub-case 1: computed T2 → computed_fully_evolved=False
_make_state(1, 1, batter_track, current_tier=0)
_make_stats(1, 1, 1, pa=160)
result = _eval(1, 1, dry_run=True)
assert result["computed_fully_evolved"] is False, (
f"computed_fully_evolved should be False for T2; got {result['computed_fully_evolved']}"
)
assert result["fully_evolved"] is False, (
"stored fully_evolved should remain False after dry_run"
)
# Reset for sub-case 2: computed T4 → computed_fully_evolved=True
CardStateStub.delete().execute()
StatsStub.delete().execute()
_make_state(1, 1, batter_track, current_tier=0)
_make_stats(1, 1, 1, pa=900) # value=900 → T4
result2 = _eval(1, 1, dry_run=True)
assert result2["computed_fully_evolved"] is True, (
f"computed_fully_evolved should be True for T4; got {result2['computed_fully_evolved']}"
)
assert result2["fully_evolved"] is False, (
"stored fully_evolved should remain False after dry_run even at T4"
)


@@ -1,8 +1,8 @@
"""
Tests for WP-10: evolution_card_state initialization on pack opening.
Tests for WP-10: refractor_card_state initialization on pack opening.
Covers `app/services/evolution_init.py`: the `initialize_card_evolution`
function that creates an EvolutionCardState row when a card is first acquired.
Covers `app/services/refractor_init.py`: the `initialize_card_refractor`
function that creates a RefractorCardState row when a card is first acquired.
Test strategy:
- Unit tests for `_determine_card_type` cover all three branches (batter,
@@ -18,7 +18,7 @@ Why we test idempotency:
Why we test cross-player isolation:
Two different players with the same team must each get their own
EvolutionCardState row. A bug that checked only team_id would share state
RefractorCardState row. A bug that checked only team_id would share state
across players, so we assert that state.player_id matches.
"""
@@ -26,11 +26,11 @@ import pytest
from app.db_engine import (
Cardset,
EvolutionCardState,
EvolutionTrack,
RefractorCardState,
RefractorTrack,
Player,
)
from app.services.evolution_init import _determine_card_type, initialize_card_evolution
from app.services.refractor_init import _determine_card_type, initialize_card_refractor
# ---------------------------------------------------------------------------
@@ -74,13 +74,13 @@ def _make_player(rarity, pos_1: str) -> Player:
)
def _make_track(card_type: str) -> EvolutionTrack:
"""Create an EvolutionTrack for the given card_type.
def _make_track(card_type: str) -> RefractorTrack:
"""Create an RefractorTrack for the given card_type.
Thresholds are kept small and arbitrary; the unit under test only
cares about card_type when selecting the track.
"""
return EvolutionTrack.create(
return RefractorTrack.create(
name=f"Track-{card_type}",
card_type=card_type,
formula="pa",
@@ -116,14 +116,14 @@ class TestDetermineCardType:
"""pos_1 == 'RP' maps to card_type 'rp'.
Relief pitchers carry the 'RP' position flag and must follow a
separate evolution track with lower thresholds.
separate refractor track with lower thresholds.
"""
assert _determine_card_type(_FakePlayer("RP")) == "rp"
def test_closer_pitcher(self):
"""pos_1 == 'CP' maps to card_type 'rp'.
Closers share the RP evolution track; the spec explicitly lists 'CP'
Closers share the RP refractor track; the spec explicitly lists 'CP'
as an rp-track position.
"""
assert _determine_card_type(_FakePlayer("CP")) == "rp"
@@ -154,12 +154,56 @@ class TestDetermineCardType:
# ---------------------------------------------------------------------------
# Integration tests — initialize_card_evolution
# Integration tests — initialize_card_refractor
# ---------------------------------------------------------------------------
class TestDetermineCardTypeEdgeCases:
"""T2-2: Parametrized edge cases for _determine_card_type.
Covers all the boundary inputs identified in the PO review:
DH, C, 2B (batters), empty string, None, and the compound 'SP/RP'
which contains both 'SP' and 'RP' substrings.
The function checks 'SP' before 'RP'/'CP', so 'SP/RP' resolves to 'sp'.
"""
@pytest.mark.parametrize(
"pos_1, expected",
[
# Plain batter positions
("DH", "batter"),
("C", "batter"),
("2B", "batter"),
# Empty / None — fall through to batter default
("", "batter"),
(None, "batter"),
# Compound string containing 'SP' first — must resolve to 'sp'
# because _determine_card_type checks "SP" in pos.upper() before RP/CP
("SP/RP", "sp"),
],
)
def test_position_mapping(self, pos_1, expected):
"""_determine_card_type maps each pos_1 value to the expected card_type.
What: Directly exercises _determine_card_type with the given pos_1 string.
None is handled by the `(player.pos_1 or "").upper()` guard in the
implementation, so it falls through to 'batter'.
Why: The card_type string is the key used to look up a RefractorTrack.
An incorrect mapping silently assigns the wrong thresholds to a player's
entire refractor journey. Parametrized so each edge case is a
distinct, independently reported test failure.
"""
player = _FakePlayer(pos_1)
assert _determine_card_type(player) == expected, (
f"pos_1={pos_1!r}: expected {expected!r}, "
f"got {_determine_card_type(player)!r}"
)
class TestInitializeCardEvolution:
"""Integration tests for initialize_card_evolution against in-memory SQLite.
"""Integration tests for initialize_card_refractor against in-memory SQLite.
Each test relies on the conftest autouse fixture to get a clean database.
We create tracks for all three card types so the function can always find
@@ -168,9 +212,9 @@ class TestInitializeCardEvolution:
@pytest.fixture(autouse=True)
def seed_tracks(self):
"""Create one EvolutionTrack per card_type before each test.
"""Create one RefractorTrack per card_type before each test.
initialize_card_evolution does a DB lookup for a track matching the
initialize_card_refractor does a DB lookup for a track matching the
card_type. If no track exists the function must not crash (it should
log and return None), but having tracks present lets us verify the
happy path for all three types without repeating setup in every test.
@@ -180,7 +224,7 @@ class TestInitializeCardEvolution:
self.rp_track = _make_track("rp")
def test_first_card_creates_state(self, rarity, team):
"""First acquisition creates an EvolutionCardState with zeroed values.
"""First acquisition creates an RefractorCardState with zeroed values.
Acceptance criteria from WP-10:
- current_tier == 0
@@ -189,7 +233,7 @@ class TestInitializeCardEvolution:
- track matches the player's card_type (batter here)
"""
player = _make_player(rarity, "2B")
state = initialize_card_evolution(player.player_id, team.id, "batter")
state = initialize_card_refractor(player.player_id, team.id, "batter")
assert state is not None
assert state.player_id == player.player_id
@@ -208,7 +252,7 @@ class TestInitializeCardEvolution:
"""
player = _make_player(rarity, "SS")
# First call creates the state
state1 = initialize_card_evolution(player.player_id, team.id, "batter")
state1 = initialize_card_refractor(player.player_id, team.id, "batter")
assert state1 is not None
# Simulate partial evolution progress
@@ -217,22 +261,22 @@ class TestInitializeCardEvolution:
state1.save()
# Second call (duplicate card) must not reset progress
state2 = initialize_card_evolution(player.player_id, team.id, "batter")
state2 = initialize_card_refractor(player.player_id, team.id, "batter")
assert state2 is not None
# Exactly one row in the database
count = (
EvolutionCardState.select()
RefractorCardState.select()
.where(
EvolutionCardState.player == player,
EvolutionCardState.team == team,
RefractorCardState.player == player,
RefractorCardState.team == team,
)
.count()
)
assert count == 1
# Progress was NOT reset
refreshed = EvolutionCardState.get_by_id(state1.id)
refreshed = RefractorCardState.get_by_id(state1.id)
assert refreshed.current_tier == 2
assert refreshed.current_value == 250.0
@@ -246,8 +290,8 @@ class TestInitializeCardEvolution:
player_a = _make_player(rarity, "LF")
player_b = _make_player(rarity, "RF")
state_a = initialize_card_evolution(player_a.player_id, team.id, "batter")
state_b = initialize_card_evolution(player_b.player_id, team.id, "batter")
state_a = initialize_card_refractor(player_a.player_id, team.id, "batter")
state_b = initialize_card_refractor(player_b.player_id, team.id, "batter")
assert state_a is not None
assert state_b is not None
@@ -256,7 +300,7 @@ class TestInitializeCardEvolution:
assert state_b.player_id == player_b.player_id
def test_sp_card_gets_sp_track(self, rarity, team):
"""A starting pitcher is assigned the 'sp' EvolutionTrack.
"""A starting pitcher is assigned the 'sp' RefractorTrack.
Track selection is driven by card_type, which in turn comes from
pos_1. This test passes card_type='sp' explicitly (mirroring the
@@ -264,15 +308,15 @@ class TestInitializeCardEvolution:
state links to the sp track, not the batter track.
"""
player = _make_player(rarity, "SP")
state = initialize_card_evolution(player.player_id, team.id, "sp")
state = initialize_card_refractor(player.player_id, team.id, "sp")
assert state is not None
assert state.track_id == self.sp_track.id
def test_rp_card_gets_rp_track(self, rarity, team):
"""A relief pitcher (RP or CP) is assigned the 'rp' EvolutionTrack."""
"""A relief pitcher (RP or CP) is assigned the 'rp' RefractorTrack."""
player = _make_player(rarity, "RP")
state = initialize_card_evolution(player.player_id, team.id, "rp")
state = initialize_card_refractor(player.player_id, team.id, "rp")
assert state is not None
assert state.track_id == self.rp_track.id
@@ -291,7 +335,7 @@ class TestInitializeCardEvolution:
# Delete the sp track to simulate missing seed data
self.sp_track.delete_instance()
result = initialize_card_evolution(player.player_id, team.id, "sp")
result = initialize_card_refractor(player.player_id, team.id, "sp")
assert result is None
def test_card_type_from_pos1_batter(self, rarity, team):
@@ -302,7 +346,7 @@ class TestInitializeCardEvolution:
"""
player = _make_player(rarity, "3B")
card_type = _determine_card_type(player)
-state = initialize_card_evolution(player.player_id, team.id, card_type)
+state = initialize_card_refractor(player.player_id, team.id, card_type)
assert state is not None
assert state.track_id == self.batter_track.id
@@ -311,7 +355,7 @@ class TestInitializeCardEvolution:
"""_determine_card_type is wired correctly for a starting pitcher."""
player = _make_player(rarity, "SP")
card_type = _determine_card_type(player)
-state = initialize_card_evolution(player.player_id, team.id, card_type)
+state = initialize_card_refractor(player.player_id, team.id, card_type)
assert state is not None
assert state.track_id == self.sp_track.id
@@ -320,7 +364,7 @@ class TestInitializeCardEvolution:
"""_determine_card_type correctly routes CP to the rp track."""
player = _make_player(rarity, "CP")
card_type = _determine_card_type(player)
-state = initialize_card_evolution(player.player_id, team.id, card_type)
+state = initialize_card_refractor(player.player_id, team.id, card_type)
assert state is not None
assert state.track_id == self.rp_track.id


@@ -1,12 +1,12 @@
"""
-Tests for evolution-related models and BattingSeasonStats.
+Tests for refractor-related models and BattingSeasonStats.
Covers WP-01 acceptance criteria:
-- EvolutionTrack: CRUD and unique-name constraint
-- EvolutionCardState: CRUD, defaults, unique-(player,team) constraint,
-and FK resolution back to EvolutionTrack
-- EvolutionTierBoost: CRUD and unique-(track, tier, boost_type, boost_target)
-- EvolutionCosmetic: CRUD and unique-name constraint
+- RefractorTrack: CRUD and unique-name constraint
+- RefractorCardState: CRUD, defaults, unique-(player,team) constraint,
+and FK resolution back to RefractorTrack
+- RefractorTierBoost: CRUD and unique-(track, tier, boost_type, boost_target)
+- RefractorCosmetic: CRUD and unique-name constraint
- BattingSeasonStats: CRUD with defaults, unique-(player, team, season),
and in-place stat accumulation
@@ -21,21 +21,21 @@ from playhouse.shortcuts import model_to_dict
from app.db_engine import (
BattingSeasonStats,
-EvolutionCardState,
-EvolutionCosmetic,
-EvolutionTierBoost,
-EvolutionTrack,
+RefractorCardState,
+RefractorCosmetic,
+RefractorTierBoost,
+RefractorTrack,
)
# ---------------------------------------------------------------------------
-# EvolutionTrack
+# RefractorTrack
# ---------------------------------------------------------------------------
-class TestEvolutionTrack:
-"""Tests for the EvolutionTrack model.
+class TestRefractorTrack:
+"""Tests for the RefractorTrack model.
-EvolutionTrack defines a named progression path (formula +
+RefractorTrack defines a named progression path (formula +
tier thresholds) for a card type. The name column carries a
UNIQUE constraint so that accidental duplicates are caught at
the database level.
@@ -60,12 +60,12 @@ class TestEvolutionTrack:
def test_track_unique_name(self, track):
"""Inserting a second track with the same name raises IntegrityError.
-The UNIQUE constraint on EvolutionTrack.name must prevent two
+The UNIQUE constraint on RefractorTrack.name must prevent two
tracks from sharing the same identifier, as the name is used as
a human-readable key throughout the evolution system.
"""
with pytest.raises(IntegrityError):
-EvolutionTrack.create(
+RefractorTrack.create(
name="Batter Track", # duplicate
card_type="sp",
formula="outs * 3",
@@ -77,15 +77,15 @@
# ---------------------------------------------------------------------------
-# EvolutionCardState
+# RefractorCardState
# ---------------------------------------------------------------------------
-class TestEvolutionCardState:
-"""Tests for EvolutionCardState, which tracks per-player evolution progress.
+class TestRefractorCardState:
+"""Tests for RefractorCardState, which tracks per-player refractor progress.
Each row represents one card (player) owned by one team, linked to a
-specific EvolutionTrack. The model records the current tier (0-4),
+specific RefractorTrack. The model records the current tier (0-4),
accumulated progress value, and whether the card is fully evolved.
"""
@@ -98,9 +98,9 @@ class TestEvolutionCardState:
fully_evolved False (evolution is not complete at creation)
last_evaluated_at None (never evaluated yet)
"""
-state = EvolutionCardState.create(player=player, team=team, track=track)
+state = RefractorCardState.create(player=player, team=team, track=track)
-fetched = EvolutionCardState.get_by_id(state.id)
+fetched = RefractorCardState.get_by_id(state.id)
assert fetched.player_id == player.player_id
assert fetched.team_id == team.id
assert fetched.track_id == track.id
@@ -113,34 +113,34 @@ class TestEvolutionCardState:
"""A second card state for the same (player, team) pair raises IntegrityError.
The unique index on (player, team) enforces that each player card
-has at most one evolution state per team roster slot, preventing
-duplicate evolution progress rows for the same physical card.
+has at most one refractor state per team roster slot, preventing
+duplicate refractor progress rows for the same physical card.
"""
-EvolutionCardState.create(player=player, team=team, track=track)
+RefractorCardState.create(player=player, team=team, track=track)
with pytest.raises(IntegrityError):
-EvolutionCardState.create(player=player, team=team, track=track)
+RefractorCardState.create(player=player, team=team, track=track)
def test_card_state_fk_track(self, player, team, track):
"""Accessing card_state.track returns the original EvolutionTrack instance.
"""Accessing card_state.track returns the original RefractorTrack instance.
This confirms the FK is correctly wired and that Peewee resolves
the relationship, returning an object with the same primary key and
name as the track used during creation.
"""
-state = EvolutionCardState.create(player=player, team=team, track=track)
-fetched = EvolutionCardState.get_by_id(state.id)
+state = RefractorCardState.create(player=player, team=team, track=track)
+fetched = RefractorCardState.get_by_id(state.id)
resolved_track = fetched.track
assert resolved_track.id == track.id
assert resolved_track.name == "Batter Track"
# ---------------------------------------------------------------------------
-# EvolutionTierBoost
+# RefractorTierBoost
# ---------------------------------------------------------------------------
-class TestEvolutionTierBoost:
-"""Tests for EvolutionTierBoost, the per-tier stat/rating bonus table.
+class TestRefractorTierBoost:
+"""Tests for RefractorTierBoost, the per-tier stat/rating bonus table.
Each row maps a (track, tier) combination to a single boost: the
specific stat or rating column to buff and by how much. The four-
@@ -153,14 +153,14 @@ class TestEvolutionTierBoost:
Verifies boost_type, boost_target, and boost_value are stored
and retrieved without modification.
"""
-boost = EvolutionTierBoost.create(
+boost = RefractorTierBoost.create(
track=track,
tier=1,
boost_type="rating",
boost_target="contact_vl",
boost_value=1.5,
)
-fetched = EvolutionTierBoost.get_by_id(boost.id)
+fetched = RefractorTierBoost.get_by_id(boost.id)
assert fetched.track_id == track.id
assert fetched.tier == 1
assert fetched.boost_type == "rating"
@@ -174,7 +174,7 @@
(e.g. Tier-1 contact_vl rating) cannot be defined twice for the
same track, which would create ambiguity during evolution evaluation.
"""
-EvolutionTierBoost.create(
+RefractorTierBoost.create(
track=track,
tier=2,
boost_type="rating",
@@ -182,7 +182,7 @@
boost_value=2.0,
)
with pytest.raises(IntegrityError):
-EvolutionTierBoost.create(
+RefractorTierBoost.create(
track=track,
tier=2,
boost_type="rating",
@@ -192,12 +192,12 @@
# ---------------------------------------------------------------------------
-# EvolutionCosmetic
+# RefractorCosmetic
# ---------------------------------------------------------------------------
-class TestEvolutionCosmetic:
-"""Tests for EvolutionCosmetic, decorative unlocks tied to evolution tiers.
+class TestRefractorCosmetic:
+"""Tests for RefractorCosmetic, decorative unlocks tied to evolution tiers.
Cosmetics are purely visual rewards (frames, badges, themes) that a
card unlocks when it reaches a required tier. The name column is
@@ -210,14 +210,14 @@
Verifies all columns including optional ones (css_class, asset_url)
are stored and retrieved.
"""
-cosmetic = EvolutionCosmetic.create(
+cosmetic = RefractorCosmetic.create(
name="Gold Frame",
tier_required=2,
cosmetic_type="frame",
css_class="evo-frame-gold",
asset_url="https://cdn.example.com/frames/gold.png",
)
-fetched = EvolutionCosmetic.get_by_id(cosmetic.id)
+fetched = RefractorCosmetic.get_by_id(cosmetic.id)
assert fetched.name == "Gold Frame"
assert fetched.tier_required == 2
assert fetched.cosmetic_type == "frame"
@@ -227,16 +227,16 @@
def test_cosmetic_unique_name(self):
"""Inserting a second cosmetic with the same name raises IntegrityError.
-The UNIQUE constraint on EvolutionCosmetic.name prevents duplicate
+The UNIQUE constraint on RefractorCosmetic.name prevents duplicate
cosmetic definitions that could cause ambiguous tier unlock lookups.
"""
-EvolutionCosmetic.create(
+RefractorCosmetic.create(
name="Silver Badge",
tier_required=1,
cosmetic_type="badge",
)
with pytest.raises(IntegrityError):
-EvolutionCosmetic.create(
+RefractorCosmetic.create(
name="Silver Badge", # duplicate
tier_required=3,
cosmetic_type="badge",


@@ -0,0 +1,242 @@
"""
Tests for app/seed/refractor_tracks.py seed_refractor_tracks().
What: Verify that the JSON-driven seed function correctly creates, counts,
and idempotently updates RefractorTrack rows in the database.
Why: The seed is the single source of truth for track configuration. A
regression here (duplicates, wrong thresholds, missing formula) would
silently corrupt refractor scoring for every card in the system.
Each test operates on a fresh in-memory SQLite database provided by the
autouse `setup_test_db` fixture in conftest.py. The seed reads its data
from `app/seed/refractor_tracks.json` on disk, so the tests also serve as
a light integration check between the JSON file and the Peewee model.
"""
import json
from pathlib import Path
import pytest
from app.db_engine import RefractorTrack
from app.seed.refractor_tracks import seed_refractor_tracks
# Path to the JSON fixture that the seed reads from at runtime
_JSON_PATH = Path(__file__).parent.parent / "app" / "seed" / "refractor_tracks.json"
@pytest.fixture
def json_tracks():
"""Load the raw JSON definitions so tests can assert against them.
This avoids hardcoding expected values: if the JSON changes, the tests
automatically follow without needing manual updates.
"""
return json.loads(_JSON_PATH.read_text(encoding="utf-8"))
def test_seed_creates_three_tracks(json_tracks):
"""After one seed call, exactly 3 RefractorTrack rows must exist.
Why: The JSON currently defines three card-type tracks (batter, sp, rp).
If the count is wrong the system would either be missing tracks
(refractor disabled for a card type) or have phantom extras.
"""
seed_refractor_tracks()
assert RefractorTrack.select().count() == 3
def test_seed_correct_card_types(json_tracks):
"""The set of card_type values persisted must match the JSON exactly.
Why: card_type is used as a discriminator throughout the refractor engine.
An unexpected value (e.g. 'pitcher' instead of 'sp') would cause
track-lookup misses and silently skip refractor scoring for that role.
"""
seed_refractor_tracks()
expected_types = {d["card_type"] for d in json_tracks}
actual_types = {t.card_type for t in RefractorTrack.select()}
assert actual_types == expected_types
def test_seed_thresholds_ascending():
"""For every track, t1 < t2 < t3 < t4.
Why: The refractor engine uses these thresholds to determine tier
boundaries. If they are not strictly ascending, tier comparisons
would produce incorrect or undefined results (e.g. a player could
simultaneously satisfy tier 3 and not satisfy tier 2).
"""
seed_refractor_tracks()
for track in RefractorTrack.select():
assert track.t1_threshold < track.t2_threshold, (
f"{track.name}: t1 ({track.t1_threshold}) >= t2 ({track.t2_threshold})"
)
assert track.t2_threshold < track.t3_threshold, (
f"{track.name}: t2 ({track.t2_threshold}) >= t3 ({track.t3_threshold})"
)
assert track.t3_threshold < track.t4_threshold, (
f"{track.name}: t3 ({track.t3_threshold}) >= t4 ({track.t4_threshold})"
)
def test_seed_thresholds_positive():
"""All tier threshold values must be strictly greater than zero.
Why: A zero or negative threshold would mean a card starts the game
already evolved (tier >= 1 at 0 accumulated stat points), which would
bypass the entire refractor progression system.
"""
seed_refractor_tracks()
for track in RefractorTrack.select():
assert track.t1_threshold > 0, f"{track.name}: t1_threshold is not positive"
assert track.t2_threshold > 0, f"{track.name}: t2_threshold is not positive"
assert track.t3_threshold > 0, f"{track.name}: t3_threshold is not positive"
assert track.t4_threshold > 0, f"{track.name}: t4_threshold is not positive"
def test_seed_formula_present():
"""Every persisted track must have a non-empty formula string.
Why: The formula is evaluated at runtime to compute a player's refractor
score. An empty formula would cause either a Python eval error or
silently produce 0 for every player, halting all refractor progress.
"""
seed_refractor_tracks()
for track in RefractorTrack.select():
assert track.formula and track.formula.strip(), (
f"{track.name}: formula is empty or whitespace-only"
)
def test_seed_idempotent():
"""Calling seed_refractor_tracks() twice must still yield exactly 3 rows.
Why: The seed is designed to be safe to re-run (e.g. as part of a
migration or CI bootstrap). If it inserts duplicates on a second call,
the unique constraint on RefractorTrack.name would raise an IntegrityError
in PostgreSQL, and in SQLite it would silently create phantom rows that
corrupt tier-lookup joins.
"""
seed_refractor_tracks()
seed_refractor_tracks()
assert RefractorTrack.select().count() == 3
# ---------------------------------------------------------------------------
# T1-4: Seed threshold ordering invariant (t1 < t2 < t3 < t4 + all positive)
# ---------------------------------------------------------------------------
def test_seed_all_thresholds_strictly_ascending_after_seed():
"""After seeding, every track satisfies t1 < t2 < t3 < t4.
What: Call seed_refractor_tracks(), then assert the full ordering chain
t1 < t2 < t3 < t4 for every row in the database. Also assert that all
four thresholds are strictly positive (> 0).
Why: The refractor tier engine uses these thresholds as exclusive partition
points. If any threshold is out-of-order or zero the tier assignment
becomes incorrect or undefined. This test is the authoritative invariant
guard; if a JSON edit accidentally violates the ordering this test fails
loudly before any cards are affected.
Separate from test_seed_thresholds_ascending, which was written earlier:
this test combines ordering and positivity into a single explicit assertion
block and uses more descriptive messages to aid debugging.
"""
seed_refractor_tracks()
for track in RefractorTrack.select():
assert track.t1_threshold > 0, (
f"{track.name}: t1_threshold={track.t1_threshold} is not positive"
)
assert track.t2_threshold > 0, (
f"{track.name}: t2_threshold={track.t2_threshold} is not positive"
)
assert track.t3_threshold > 0, (
f"{track.name}: t3_threshold={track.t3_threshold} is not positive"
)
assert track.t4_threshold > 0, (
f"{track.name}: t4_threshold={track.t4_threshold} is not positive"
)
assert (
track.t1_threshold
< track.t2_threshold
< track.t3_threshold
< track.t4_threshold
), (
f"{track.name}: thresholds are not strictly ascending: "
f"t1={track.t1_threshold}, t2={track.t2_threshold}, "
f"t3={track.t3_threshold}, t4={track.t4_threshold}"
)
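The ordering invariant matters because tier assignment is, in effect, a threshold walk: strictly ascending thresholds guarantee each progress value maps to exactly one tier. A minimal illustration (a hypothetical `tier_for` helper, not the engine's actual code) using the batter thresholds from the seed JSON:

```python
def tier_for(progress: float, thresholds: tuple[float, float, float, float]) -> int:
    """Return the tier (0-4) for an accumulated progress value.

    thresholds must be strictly ascending (t1 < t2 < t3 < t4); otherwise
    a single progress value could satisfy a higher tier while failing a
    lower one, exactly the ambiguity the tests above guard against.
    """
    tier = 0
    for boundary in thresholds:
        if progress >= boundary:
            tier += 1
    return tier

# Batter track thresholds from the seed JSON
batter = (37, 149, 448, 896)
assert tier_for(0, batter) == 0    # fresh card
assert tier_for(37, batter) == 1   # exactly at t1
assert tier_for(500, batter) == 3  # between t3 and t4
assert tier_for(896, batter) == 4  # top tier
```

With a non-ascending set (say t2 < t1), the same walk would count a value above t2 but below t1 into tier 1 via the wrong boundary, which is why the seed tests fail loudly on ordering violations.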
# ---------------------------------------------------------------------------
# T2-10: Duplicate card_type tracks guard
# ---------------------------------------------------------------------------
def test_seed_each_card_type_has_exactly_one_track():
"""Each card_type must appear exactly once across all RefractorTrack rows.
What: After seeding, group the rows by card_type and assert that every
card_type has a count of exactly 1.
Why: RefractorTrack rows are looked up by card_type (e.g.
RefractorTrack.get(card_type='batter')). If a card_type appears more
than once, Peewee's .get() raises MultipleObjectsReturned, crashing
every pack opening and card evaluation for that type. This test acts as
a uniqueness contract so that seed bugs or accidental DB drift surface
immediately.
"""
seed_refractor_tracks()
from peewee import fn as peewee_fn
# Group by card_type and count occurrences
query = (
RefractorTrack.select(
RefractorTrack.card_type, peewee_fn.COUNT(RefractorTrack.id).alias("cnt")
)
.group_by(RefractorTrack.card_type)
.tuples()
)
for card_type, count in query:
assert count == 1, (
f"card_type={card_type!r} has {count} tracks; expected exactly 1"
)
def test_seed_updates_on_rerun(json_tracks):
"""A second seed call must restore any manually changed threshold to the JSON value.
What: Seed once, manually mutate a threshold in the DB, then seed again.
Assert that the threshold is now back to the JSON-defined value.
Why: The seed must act as the authoritative source of truth. If
re-seeding does not overwrite local changes, configuration drift can
build up silently and the production database would diverge from the
checked-in JSON without any visible error.
"""
seed_refractor_tracks()
# Pick the first track and corrupt its t1_threshold
first_def = json_tracks[0]
track = RefractorTrack.get(RefractorTrack.name == first_def["name"])
original_t1 = track.t1_threshold
corrupted_value = original_t1 + 9999
track.t1_threshold = corrupted_value
track.save()
# Confirm the corruption took effect before re-seeding
track_check = RefractorTrack.get(RefractorTrack.name == first_def["name"])
assert track_check.t1_threshold == corrupted_value
# Re-seed — should restore the JSON value
seed_refractor_tracks()
restored = RefractorTrack.get(RefractorTrack.name == first_def["name"])
assert restored.t1_threshold == first_def["t1_threshold"], (
f"Expected t1_threshold={first_def['t1_threshold']} after re-seed, "
f"got {restored.t1_threshold}"
)
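The restore-on-rerun contract above implies the seed performs an upsert keyed on a unique column rather than a blind INSERT. A minimal sketch of that pattern using stdlib sqlite3 (the real seed_refractor_tracks() reads JSON and goes through Peewee; the column set here is trimmed for brevity):

```python
import sqlite3

def seed_tracks(conn: sqlite3.Connection, tracks: list) -> None:
    # Upsert keyed on the unique name column: re-running overwrites any
    # manual edits with the canonical values, so the seed stays the
    # single source of truth and never duplicates rows.
    conn.executemany(
        """
        INSERT INTO refractor_track (name, card_type, formula, t1_threshold)
        VALUES (:name, :card_type, :formula, :t1_threshold)
        ON CONFLICT(name) DO UPDATE SET
            card_type = excluded.card_type,
            formula = excluded.formula,
            t1_threshold = excluded.t1_threshold
        """,
        tracks,
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE refractor_track ("
    "id INTEGER PRIMARY KEY, name TEXT UNIQUE, card_type TEXT, "
    "formula TEXT, t1_threshold INTEGER)"
)
data = [{"name": "Batter Track", "card_type": "batter",
         "formula": "pa + tb * 2", "t1_threshold": 37}]
seed_tracks(conn, data)
conn.execute("UPDATE refractor_track SET t1_threshold = 9999")  # simulate drift
seed_tracks(conn, data)  # re-seed restores the canonical value
assert conn.execute("SELECT t1_threshold FROM refractor_track").fetchone()[0] == 37
assert conn.execute("SELECT COUNT(*) FROM refractor_track").fetchone()[0] == 1
```

This is the same shape as both test_seed_idempotent (row count stays fixed) and test_seed_updates_on_rerun (drifted values snap back).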

File diff suppressed because it is too large


@@ -0,0 +1,336 @@
"""Integration tests for the refractor track catalog API endpoints (WP-06).
Tests cover:
GET /api/v2/refractor/tracks
GET /api/v2/refractor/tracks/{track_id}
All tests require a live PostgreSQL connection (POSTGRES_HOST env var) and
assume the refractor schema migration (WP-04) has already been applied.
Tests auto-skip when POSTGRES_HOST is not set.
Test data is inserted via psycopg2 before the test module runs and deleted
afterwards so the tests are repeatable. ON CONFLICT keeps the table clean
even if a previous run did not complete teardown.
Tier 3 tests (T3-1) in this file use a SQLite-backed TestClient so they run
without a PostgreSQL connection. They test the card_type filter edge cases:
an unrecognised card_type string and an empty string should both return an
empty list (200 with count=0) rather than an error.
"""
import os
import pytest
from fastapi import FastAPI, Request
from fastapi.testclient import TestClient
from peewee import SqliteDatabase
os.environ.setdefault("API_TOKEN", "test-token")
from app.db_engine import ( # noqa: E402
BattingSeasonStats,
Card,
Cardset,
Decision,
Event,
MlbPlayer,
Pack,
PackType,
PitchingSeasonStats,
Player,
ProcessedGame,
Rarity,
RefractorCardState,
RefractorCosmetic,
RefractorTierBoost,
RefractorTrack,
Roster,
RosterSlot,
ScoutClaim,
ScoutOpportunity,
StratGame,
StratPlay,
Team,
)
POSTGRES_HOST = os.environ.get("POSTGRES_HOST")
_skip_no_pg = pytest.mark.skipif(
not POSTGRES_HOST, reason="POSTGRES_HOST not set — integration tests skipped"
)
AUTH_HEADER = {"Authorization": f"Bearer {os.environ.get('API_TOKEN', 'test-token')}"}
_SEED_TRACKS = [
("Batter", "batter", "pa+tb*2", 37, 149, 448, 896),
("Starting Pitcher", "sp", "ip+k", 10, 40, 120, 240),
("Relief Pitcher", "rp", "ip+k", 3, 12, 35, 70),
]
@pytest.fixture(scope="module")
def seeded_tracks(pg_conn):
"""Insert three canonical evolution tracks; remove them after the module.
Uses ON CONFLICT DO UPDATE so the fixture is safe to run even if rows
already exist from a prior test run that did not clean up. Returns the
list of row IDs that were upserted.
"""
cur = pg_conn.cursor()
ids = []
for name, card_type, formula, t1, t2, t3, t4 in _SEED_TRACKS:
cur.execute(
"""
INSERT INTO refractor_track
(name, card_type, formula, t1_threshold, t2_threshold, t3_threshold, t4_threshold)
VALUES (%s, %s, %s, %s, %s, %s, %s)
ON CONFLICT (card_type) DO UPDATE SET
name = EXCLUDED.name,
formula = EXCLUDED.formula,
t1_threshold = EXCLUDED.t1_threshold,
t2_threshold = EXCLUDED.t2_threshold,
t3_threshold = EXCLUDED.t3_threshold,
t4_threshold = EXCLUDED.t4_threshold
RETURNING id
""",
(name, card_type, formula, t1, t2, t3, t4),
)
ids.append(cur.fetchone()[0])
pg_conn.commit()
yield ids
cur.execute("DELETE FROM refractor_track WHERE id = ANY(%s)", (ids,))
pg_conn.commit()
@pytest.fixture(scope="module")
def client():
"""FastAPI TestClient backed by the real PostgreSQL database."""
from app.main import app
with TestClient(app) as c:
yield c
@_skip_no_pg
def test_list_tracks_returns_count_3(client, seeded_tracks):
"""GET /tracks returns all three tracks with count=3.
After seeding batter/sp/rp, the table should have exactly those three
rows (no other tracks are inserted by other test modules).
"""
resp = client.get("/api/v2/refractor/tracks", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 3
assert len(data["items"]) == 3
@_skip_no_pg
def test_filter_by_card_type(client, seeded_tracks):
"""card_type=sp filter returns exactly 1 track with card_type 'sp'."""
resp = client.get("/api/v2/refractor/tracks?card_type=sp", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 1
assert data["items"][0]["card_type"] == "sp"
@_skip_no_pg
def test_get_single_track_with_thresholds(client, seeded_tracks):
"""GET /tracks/{id} returns a track dict with formula and t1-t4 thresholds."""
track_id = seeded_tracks[0] # batter
resp = client.get(f"/api/v2/refractor/tracks/{track_id}", headers=AUTH_HEADER)
assert resp.status_code == 200
data = resp.json()
assert data["card_type"] == "batter"
assert data["formula"] == "pa+tb*2"
for key in ("t1_threshold", "t2_threshold", "t3_threshold", "t4_threshold"):
assert key in data, f"Missing field: {key}"
assert data["t1_threshold"] == 37
assert data["t4_threshold"] == 896
@_skip_no_pg
def test_404_for_nonexistent_track(client, seeded_tracks):
"""GET /tracks/999999 returns 404 when the track does not exist."""
resp = client.get("/api/v2/refractor/tracks/999999", headers=AUTH_HEADER)
assert resp.status_code == 404
@_skip_no_pg
def test_auth_required(client, seeded_tracks):
"""Requests without a Bearer token return 401 for both endpoints."""
resp_list = client.get("/api/v2/refractor/tracks")
assert resp_list.status_code == 401
track_id = seeded_tracks[0]
resp_single = client.get(f"/api/v2/refractor/tracks/{track_id}")
assert resp_single.status_code == 401
# ===========================================================================
# SQLite-backed tests for T3-1: invalid card_type query parameter
#
# These tests run without a PostgreSQL connection. They verify that the
# card_type filter on GET /api/v2/refractor/tracks handles values that match
# no known track (an unrecognised string, an empty string) gracefully: the
# endpoint must return 200 with {"count": 0, "items": []}, not a 4xx/5xx.
# ===========================================================================
_track_api_db = SqliteDatabase(
"file:trackapitest?mode=memory&cache=shared",
uri=True,
pragmas={"foreign_keys": 1},
)
_TRACK_API_MODELS = [
Rarity,
Event,
Cardset,
MlbPlayer,
Player,
Team,
PackType,
Pack,
Card,
Roster,
RosterSlot,
StratGame,
StratPlay,
Decision,
ScoutOpportunity,
ScoutClaim,
BattingSeasonStats,
PitchingSeasonStats,
ProcessedGame,
RefractorTrack,
RefractorCardState,
RefractorTierBoost,
RefractorCosmetic,
]
@pytest.fixture(autouse=False)
def setup_track_api_db():
"""Bind track-API test models to shared-memory SQLite and create tables.
Inserts exactly two tracks (batter, sp) so the filter tests have a
non-empty table to query against, confirming that the WHERE predicate
excludes them rather than the table simply being empty.
"""
_track_api_db.bind(_TRACK_API_MODELS)
_track_api_db.connect(reuse_if_open=True)
_track_api_db.create_tables(_TRACK_API_MODELS)
# Seed two real tracks so the table is not empty
RefractorTrack.get_or_create(
name="T3-1 Batter Track",
defaults=dict(
card_type="batter",
formula="pa + tb * 2",
t1_threshold=37,
t2_threshold=149,
t3_threshold=448,
t4_threshold=896,
),
)
RefractorTrack.get_or_create(
name="T3-1 SP Track",
defaults=dict(
card_type="sp",
formula="ip + k",
t1_threshold=10,
t2_threshold=40,
t3_threshold=120,
t4_threshold=240,
),
)
yield _track_api_db
_track_api_db.drop_tables(list(reversed(_TRACK_API_MODELS)), safe=True)
def _build_track_api_app() -> FastAPI:
"""Minimal FastAPI app containing only the refractor router for T3-1 tests."""
from app.routers_v2.refractor import router as refractor_router
app = FastAPI()
@app.middleware("http")
async def db_middleware(request: Request, call_next):
_track_api_db.connect(reuse_if_open=True)
return await call_next(request)
app.include_router(refractor_router)
return app
@pytest.fixture
def track_api_client(setup_track_api_db):
"""FastAPI TestClient for the SQLite-backed T3-1 track filter tests."""
with TestClient(_build_track_api_app()) as c:
yield c
# ---------------------------------------------------------------------------
# T3-1a: card_type=foo (unrecognised value) returns empty list
# ---------------------------------------------------------------------------
def test_invalid_card_type_returns_empty_list(setup_track_api_db, track_api_client):
"""GET /tracks?card_type=foo returns 200 with count=0, not a 4xx/5xx.
What: Query the track list with a card_type value ('foo') that matches
no row in refractor_track. The table contains batter and sp tracks so
the result must be an empty list rather than a full list (which would
indicate the filter was ignored).
Why: The endpoint applies a card_type equality filter only when the
parameter is not None. An unrecognised value is a valid no-match query;
the contract is an empty list, not a validation error. Returning
a 422 Unprocessable Entity or 500 here would break clients that probe
for tracks by card type before knowing which types are registered.
"""
resp = track_api_client.get(
"/api/v2/refractor/tracks?card_type=foo", headers=AUTH_HEADER
)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 0, (
f"Expected count=0 for unknown card_type 'foo', got {data['count']}"
)
assert data["items"] == [], (
f"Expected empty items list for unknown card_type 'foo', got {data['items']}"
)
# ---------------------------------------------------------------------------
# T3-1b: card_type= (empty string) returns empty list
# ---------------------------------------------------------------------------
def test_empty_string_card_type_returns_empty_list(
setup_track_api_db, track_api_client
):
"""GET /tracks?card_type= (empty string) returns 200 with count=0.
What: Pass an empty string as the card_type query parameter. No track
has card_type='' so the response must be an empty list with count=0.
Why: An empty string is not None; FastAPI will pass it through as ''
rather than treating it as an absent parameter. The WHERE predicate
`card_type == ''` produces no matches, which is the correct silent
no-results behaviour. This guards against regressions where an empty
string might be mishandled as a None/absent value and accidentally return
all tracks, or raise a server error.
"""
resp = track_api_client.get(
"/api/v2/refractor/tracks?card_type=", headers=AUTH_HEADER
)
assert resp.status_code == 200
data = resp.json()
assert data["count"] == 0, (
f"Expected count=0 for empty card_type string, got {data['count']}"
)
assert data["items"] == [], (
f"Expected empty items list for empty card_type string, got {data['items']}"
)
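Both edge cases fall out of the same filter shape: the predicate is applied whenever the parameter is not None, so '' and 'foo' are ordinary exact-match queries that select nothing. A framework-free sketch of that contract (a hypothetical list_tracks helper mirroring the endpoint's response shape, not the router's actual code):

```python
from typing import Optional

# Stand-in rows mirroring the two tracks the T3-1 fixture seeds
TRACKS = [
    {"card_type": "batter", "name": "T3-1 Batter Track"},
    {"card_type": "sp", "name": "T3-1 SP Track"},
]

def list_tracks(card_type: Optional[str] = None) -> dict:
    # None means "no filter"; any string (including '') is an exact-match
    # predicate, so unknown values simply select zero rows.
    items = TRACKS if card_type is None else [
        t for t in TRACKS if t["card_type"] == card_type
    ]
    return {"count": len(items), "items": items}

assert list_tracks()["count"] == 2      # absent param: all tracks
assert list_tracks("sp")["count"] == 1  # known value: filtered
assert list_tracks("foo")["count"] == 0 # unknown value: empty, not an error
assert list_tracks("")["count"] == 0    # empty string is not None
```

The regression the tests guard against would be collapsing the last two cases into the first, e.g. by treating any falsy card_type as "no filter".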