feat: capture total_count before limit across all paginated endpoints #169
Reference: cal/paper-dynasty-database#169
Summary
- Moved `total_count = queryset.count()` before `.limit()` across 14 paginated endpoints so the JSON response `count` field reflects total matching records, not page size
- Guarded `total_count` with an `if not csv` ternary to skip the COUNT query on CSV export paths (no regression)
- `rewards.py`: reduced from 3 redundant COUNT queries down to 1
- `scout_claims.py`: cleaned up a doubled `if limit` block
- `scout_opportunities.py`: changed `max(1, ...)` to `max(0, ...)` for consistency

Files changed
All in `app/routers_v2/`: awards, batstats, battingcardratings, events, gamerewards, gauntletrewards, mlbplayers, pitchingcardratings, pitstats, results, rewards, scout_claims, scout_opportunities, stratgame
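The pattern applied across these files can be sketched with a minimal pure-Python stand-in for a lazy queryset. The real code uses Peewee; `FakeQuery`, `list_endpoint`, and the filter predicate here are illustrative, not the repository's actual code.

```python
# Minimal stand-in for a lazy Peewee-style queryset, illustrating why
# count() must run before .limit(). Names are illustrative only.
class FakeQuery:
    def __init__(self, rows):
        self.rows = rows

    def where(self, pred):
        # Filters narrow the matched set, like SQL WHERE predicates.
        return FakeQuery([r for r in self.rows if pred(r)])

    def count(self):
        # Counts whatever the query currently matches.
        return len(self.rows)

    def limit(self, n):
        # Truncates to a single page.
        return FakeQuery(self.rows[:n])


def list_endpoint(rows, limit=10, csv=None):
    query = FakeQuery(rows).where(lambda r: r >= 5)  # all filters first
    # The fix: capture the total BEFORE .limit(), and skip the COUNT
    # query entirely on CSV export paths.
    total_count = query.count() if not csv else 0
    page = query.limit(limit).rows
    return {"count": total_count, "results": page}


resp = list_endpoint(list(range(25)), limit=10)
# count is 20 (every row matching the filter), while results holds
# only the 10-row page
```

Counting after `.limit()` would have returned the page size (10) in `count`, which is exactly the bug this PR fixes.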
Notes
- `gauntletruns.py` was already accidentally correct (inline `.limit()`) and was left unchanged
- `notifications.py` already had the correct pattern from PR #150 and was not modified

AI Code Review
Files Reviewed
- `app/routers_v2/awards.py` (modified)
- `app/routers_v2/batstats.py` (modified; ruff reformatted)
- `app/routers_v2/battingcardratings.py` (modified)
- `app/routers_v2/events.py` (modified; ruff reformatted)
- `app/routers_v2/gamerewards.py` (modified)
- `app/routers_v2/gauntletrewards.py` (modified)
- `app/routers_v2/mlbplayers.py` (modified)
- `app/routers_v2/pitchingcardratings.py` (modified)
- `app/routers_v2/pitstats.py` (modified)
- `app/routers_v2/results.py` (modified)
- `app/routers_v2/rewards.py` (modified; COUNT consolidation)
- `app/routers_v2/scout_claims.py` (modified; double-limit cleanup)
- `app/routers_v2/scout_opportunities.py` (modified; `max(1, ...)` → `max(0, ...)` + decoded from base64)
- `app/routers_v2/stratgame.py` (modified)

Findings
Correctness
- Core fix: verified correct across all 14 files. The `total_count = queryset.count()` call is placed after all filter predicates are applied but before `.limit()` in every changed endpoint. The Peewee queryset is lazy, so `count()` at this point issues `SELECT COUNT(*) FROM ... WHERE <all filters>`: exactly the total-matches semantics that was broken before. The JSON `count` field now accurately reflects total matching records regardless of page size.
- `gauntletrewards.py`: unconditional count is correct. This endpoint has no CSV branch, so the simpler `total_count = all_rewards.count()` (no ternary guard) is the right call, consistent with the endpoint's design.
- `rewards.py` consolidation: correct and an improvement. The removed pre-filter guard (`if all_rewards.count() == 0` before any filters are applied) was logically wrong; it would raise 404 if the entire rewards table was empty rather than if the filtered result set was empty. The single `total_count = all_rewards.count()` after filters, reused for both the 404 guard and the response `count` field, is the correct approach and saves two extra COUNT queries.
- `scout_claims.py`: count is correctly placed before the optional limit branch. `total_count = query.count()` fires before the `if limit is not None` block, so it always reflects the full untruncated result set. Correct.
- `scout_opportunities.py`: the `max(1, ...)` → `max(0, ...)` normalization aligns limit clamping with every other endpoint in the codebase. Callers passing `limit=0` previously got 1 row; they will now correctly get 0. Also note: the file was previously stored as base64 content in git (the old diff shows a single binary-encoded line), and is now a proper readable text file, which is a correctness fix in itself.
- `battingcardratings.py` and `pitchingcardratings.py` apply the inline `max(0, min(limit, 500))` expression directly in `.limit()` without an intermediate variable. `total_count` is correctly assigned on the line immediately before in both cases. Correct.
- No logic errors, off-by-one issues, or missed endpoints found.
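The `max(1, ...)` → `max(0, ...)` change is easy to see in isolation. The helper names below are hypothetical; the cap of 500 matches the clamp expression quoted in this review.

```python
# Old vs. new limit clamping in scout_opportunities.py, per the review
# above. Helper names are hypothetical; 500 is the cap from the quoted
# inline expression max(0, min(limit, 500)).
def clamp_old(limit: int) -> int:
    return max(1, min(limit, 500))

def clamp_new(limit: int) -> int:
    return max(0, min(limit, 500))

# limit=0: callers previously got 1 row, now correctly get 0.
print(clamp_old(0), clamp_new(0))      # 1 0
# In-range and over-cap values behave identically under both versions.
print(clamp_old(50), clamp_new(50))    # 50 50
print(clamp_old(900), clamp_new(900))  # 500 500
```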
Edge Cases
- CSV paths: the `if not csv` ternary is safe. When `csv=True`, `total_count` is set to `0` and the count query is skipped entirely. The CSV response path never reads `total_count`, so the `0` placeholder is never exposed to callers. No regression.
- `csv=None` (default) treated as falsy: correct. The parameter defaults to `None` in most endpoints. `if not csv` evaluates `not None` as `True`, so the count query fires on the default path as expected. The CSV skip only triggers when `csv=True` is explicitly passed.
- `rewards.py` 404 on empty filtered results: the count-then-404 pattern works correctly. This is the only endpoint in the PR that raises 404 on an empty result set; the others return `{"count": 0, "results": []}`. This pre-existing asymmetry is preserved unchanged and is out of scope for this PR.

Security
No issues found. No new inputs, no new query construction paths, no credentials introduced. Changes are purely structural.
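For illustration, the `rewards.py` consolidation assessed under Correctness amounts to the shape below. This is a simplified pure-Python sketch; the real endpoint uses Peewee and FastAPI's `HTTPException`, and `min_points` is a hypothetical filter parameter.

```python
# Simplified sketch of the consolidated rewards.py pattern: one count,
# taken after all filters, reused for both the 404 guard and the
# response's count field. Pure-Python stand-in, not the actual endpoint.
def get_rewards(rows, min_points=None):
    # Apply every filter predicate first (min_points is hypothetical).
    if min_points is not None:
        rows = [r for r in rows if r >= min_points]

    total_count = len(rows)  # the single remaining COUNT
    if total_count == 0:
        # Stands in for: raise HTTPException(status_code=404, ...)
        raise LookupError("404: no rewards match the given filters")

    return {"count": total_count, "results": rows}
```

The removed pre-filter guard counted the unfiltered table instead, so it raised 404 only when the whole table was empty, not when the filters matched nothing.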
Style & Conventions
Ruff reformatting in `batstats.py`, `events.py`, and `scout_opportunities.py` is large in line count but mechanically correct: single/double quote normalization, multi-line argument lists, trailing commas. No behavioral changes. Pattern consistency across all 14 files is confirmed: every non-CSV endpoint now uses a pre-limit `total_count` returned as the `count` field, with the ternary guard applied uniformly on all CSV-capable endpoints.

Suggestions
- `rewards.py` 404-on-empty is an outlier compared to the rest of the API surface. Not a blocker, but worth standardizing in a follow-up to avoid surprising API consumers who expect a consistent `{"count": 0, ...}` shape everywhere.
- `scout_opportunities.py` computes `limit = max(0, min(limit, 500))` at the top of the handler before filters are applied (pre-existing). Other endpoints clamp just before `.limit()`. Harmless, but could be normalized in a future cleanup pass.

Verdict: APPROVED (self-review; Gitea blocks self-approval)
The implementation is correct across all 14 files. `total_count` is consistently captured after filter predicates and before `.limit()` in every endpoint. The `if not csv` ternary correctly skips the COUNT query on CSV paths without exposing the placeholder value to callers. The `rewards.py` consolidation eliminates a logically flawed pre-filter 404 guard and reduces from 3 COUNT queries to 1. No security issues, no regressions found.

Automated review by Claude PR Reviewer
Approved — clean implementation across all 14 endpoints.
AI Code Review
Files Reviewed
- `app/routers_v2/awards.py` (modified)
- `app/routers_v2/batstats.py` (modified)
- `app/routers_v2/battingcardratings.py` (modified)
- `app/routers_v2/events.py` (modified)
- `app/routers_v2/gamerewards.py` (modified)
- `app/routers_v2/gauntletrewards.py` (modified)
- `app/routers_v2/mlbplayers.py` (modified)
- `app/routers_v2/pitchingcardratings.py` (modified)
- `app/routers_v2/pitstats.py` (modified)
- `app/routers_v2/results.py` (modified)
- `app/routers_v2/rewards.py` (modified)
- `app/routers_v2/scout_claims.py` (modified)
- `app/routers_v2/scout_opportunities.py` (modified)
- `app/routers_v2/stratgame.py` (modified)

Findings
Correctness
- `total_count = queryset.count()` is computed before `.limit()` is applied, so `count` in the JSON response now reflects total matching records, not page size. This is the intended fix.
- The ternary guard (`if not csv`) is correctly applied in all files that have a CSV path (awards, batstats, battingcardratings, events, gamerewards, mlbplayers, pitchingcardratings, pitstats, results, stratgame). `gauntletrewards.py` correctly omits the guard since it has no CSV path.
- The `rewards.py` consolidation is correct. The removed pre-filter check (`if all_rewards.count() == 0: raise 404 "There are no rewards to filter"`) was redundant: if the table is empty, the post-filter check still catches it. Going from 3 COUNT queries to 1 is a clean improvement, and `total_count` is computed unconditionally because it's needed for the zero-check regardless of the CSV path.
- The `scout_claims.py` cleanup is correct. `total_count` is captured before the `if limit is not None` guard and returned in place of `len(results)`.

Notable: scout_opportunities.py was base64-corrupted
The diff reveals the old `scout_opportunities.py` contained a single line of base64-encoded text (`aW1wb3J0IGpzb24K...`) rather than Python source. This decodes to the same Python code shown in the new file. The corruption likely originated from a git API commit (Gitea's `update_file` tool encodes content as base64 for the API payload; if that encoding was accidentally stored literally, the file would be unrunnable). The endpoint was broken: FastAPI would have raised a `SyntaxError` on import. This PR silently fixes that corruption alongside the count fix. Worth confirming the endpoint was healthy in prod before merging, but the fix itself is correct.

Missing endpoints (MINOR)
The PR title says "across all paginated endpoints" but `paperdex.py` (PR #167) and `cardpositions.py` (PR #168) are also paginated endpoints and were not updated. Both still return `count` = limited count, not total. Follow-up PR recommended to close the gap, or update the title to clarify scope.

Security
Style & Conventions
- The reformatting in `batstats.py`, `events.py`, and `scout_opportunities.py` (single→double quotes, parameter reformatting) is non-functional and consistent with prior PRs in this series.
- `gauntletrewards.py` omits the `if not csv` guard (correct: no CSV path); the inconsistent-looking pattern is intentional and correct.

Verdict: COMMENT
Solid correctness fix. The pattern is applied uniformly and accurately. The scout_opportunities.py base64 corruption was a latent breakage that this PR inadvertently resolved — worth confirming no prod incidents occurred in the window between that corruption and this merge. The only gaps are paperdex and cardpositions (missed in scope), which should be a one-line fix each.
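The decode claim is easy to check against the chunk quoted in the review above (only the quoted prefix is decoded here; the elided `...` remainder is not reconstructed):

```python
import base64

# First chunk of the corrupted scout_opportunities.py content quoted
# in this review; it decodes to the opening of an ordinary Python module.
prefix = "aW1wb3J0IGpzb24K"
print(base64.b64decode(prefix))  # b'import json\n'
```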
Automated review by Claude PR Reviewer