fix: prevent paperdex timeout on unfiltered requests (#102) #103
Reference: cal/paper-dynasty-database#103
Closes #102
## Summary
`GET /api/v2/paperdex` timed out on unfiltered (or lightly filtered) requests due to two unbounded operations in `get_paperdex`:

- **Pre-filter `COUNT(*)`:** `all_dex.count()` was called on the full table before any `WHERE` filters were applied — a full-table scan on every request.
- **Double `COUNT(*)`:** `all_dex.count()` was called again in the response body, then the full queryset was iterated separately — two queries for what should be one.
- **No `LIMIT`:** the missing `LIMIT` clause meant the serialization loop (`model_to_dict(x, recurse=True)`) could run over the entire table.

## Changes (`app/routers_v2/paperdex.py`)

- Removed the `all_dex.count() == 0` check (and its stale commented-out sibling).
- Added a `limit: Optional[int] = 500` query parameter — defaults to 500, applied after all filters.
- Replaced `all_dex.count()` in the response with `items = list(all_dex)` + `len(items)` — single query, no double-count.
- Removed the unused `all_sets` variable (was blocking the ruff pre-commit hook).
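The before/after query pattern can be sketched with stdlib `sqlite3`. Table and column names below are illustrative stand-ins, not the project's actual peewee models:

```python
import sqlite3

# Toy table standing in for the real paperdex model.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE paperdex (id INTEGER PRIMARY KEY, team_id INTEGER)")
conn.executemany("INSERT INTO paperdex (team_id) VALUES (?)",
                 [(i % 3,) for i in range(1000)])

def get_paperdex_old(team_id=None):
    # BEFORE: full-table COUNT(*) before any filter, then a second COUNT(*)
    # plus an unbounded fetch — three passes over the data.
    total = conn.execute("SELECT COUNT(*) FROM paperdex").fetchone()[0]
    if total == 0:
        return None  # old 404 path
    where, args = ("WHERE team_id = ?", (team_id,)) if team_id is not None else ("", ())
    count = conn.execute(f"SELECT COUNT(*) FROM paperdex {where}", args).fetchone()[0]
    rows = conn.execute(f"SELECT * FROM paperdex {where}", args).fetchall()  # no LIMIT
    return {"count": count, "paperdex": rows}

def get_paperdex_new(team_id=None, limit=500):
    # AFTER: filters first, LIMIT applied, a single query; len() on the
    # materialized list is O(1).
    where, args = ("WHERE team_id = ?", (team_id,)) if team_id is not None else ("", ())
    items = conn.execute(
        f"SELECT * FROM paperdex {where} LIMIT ?", (*args, limit)
    ).fetchall()
    return {"count": len(items), "paperdex": items}
```

With 1000 rows, the new handler returns at most 500 by default and runs one query instead of three.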
## Other observations

- `get_paperdex` has further issues around its filter columns (`team_id`, `player_id`, `id`) — these are pre-existing, not introduced by this PR.

## AI Code Review
### Files Reviewed
- `app/routers_v2/paperdex.py` (modified)

### Findings
#### Correctness
- **`count()` removal (line 31):** Correct. The old `all_dex.count() == 0` check ran a full-table `COUNT(*)` before any `WHERE` clauses on every request. Removing it eliminates the scan. The behavior change (an empty result now returns `{"count": 0, "paperdex": []}` instead of 404) is better REST semantics.
- **`items = list(all_dex)` + `len(items)` (lines 62–63):** Correct. Materializes the queryset exactly once; `len()` on a Python list is O(1) with no second DB query. Fixes the double-`COUNT(*)` described in the PR body.
- **`all_sets` removal (cardset branch):** Correct — `all_sets` was assigned but never referenced. The actual filter `Paperdex.player.cardset.id == cardset_id` is unchanged and correct.
- **`limit` applied before the `if csv:` branch (lines 48–49):** The LIMIT is applied to `all_dex` before the CSV path executes, so CSV exports are also capped at 500 rows by default. This is a behavior change for any CSV consumers who previously relied on unbounded exports; callers must now pass `?limit=N` explicitly for full exports. Not a bug, but worth noting.
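The single-materialization point can be made concrete with a toy queryset that records how often the "database" is hit. `FakeQuery` is illustrative only, not the project's ORM:

```python
class FakeQuery:
    """Stand-in for an ORM queryset: every iteration or .count() is one DB hit."""
    def __init__(self, rows):
        self.rows = rows
        self.db_hits = 0

    def __iter__(self):
        self.db_hits += 1          # each iteration = one SELECT
        return iter(self.rows)

    def count(self):
        self.db_hits += 1          # each .count() = one COUNT(*)
        return len(self.rows)

# BEFORE: .count() for the response body plus a separate iteration — two queries.
q_old = FakeQuery(["a", "b", "c"])
_ = q_old.count()
_ = [row for row in q_old]
hits_before = q_old.db_hits        # 2

# AFTER: one iteration; len() on the resulting list adds no query.
q_new = FakeQuery(["a", "b", "c"])
items = list(q_new)
count = len(items)
hits_after = q_new.db_hits         # 1
```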
#### Security

- `logging.warning("Bad Token: [REDACTED]")` is preserved correctly across all three auth-gated endpoints — no token leakage.

#### Style & Conventions
- Formatting changes in `get_paperdex` are cosmetic. Acceptable.
- The `raise HTTPException(status_code=200, ...)` in `delete_paperdex` (lines 182–184) is a pre-existing bug (should be `return`), not introduced by this PR — out of scope.
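The `status_code=200` finding comes down to control flow: raising routes the response through the exception handler instead of the normal return path. A minimal sketch, using a stand-in for `fastapi.HTTPException` so the contrast is runnable without FastAPI installed:

```python
class HTTPException(Exception):
    """Minimal stand-in for fastapi.HTTPException."""
    def __init__(self, status_code, detail=None):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def delete_buggy():
    # BUG: "success" is signalled by raising, so the endpoint's normal
    # return path is never taken and the handler emits {"detail": ...}.
    raise HTTPException(status_code=200, detail="paperdex entry deleted")

def delete_fixed():
    # FIX: return the payload; the framework sends it as a plain 200.
    return {"detail": "paperdex entry deleted"}

def dispatch(endpoint):
    # Rough analogue of a framework calling an endpoint with an
    # HTTPException handler installed.
    try:
        return ("return-path", endpoint())
    except HTTPException as exc:
        return ("exception-path", {"detail": exc.detail})
```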
#### Suggestions

- Document the new cap: responses now return at most `limit` rows (default 500). Callers may not discover this until they see truncated CSV exports.
- `limit=0` edge case: `?limit=0` issues `LIMIT 0` to PostgreSQL and returns an empty result. If that's semantically meaningless, a `ge=1` constraint (`limit: Optional[int] = Query(500, ge=1)`) would guard it. Minor suggestion only.
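The suggested guard in FastAPI would be `limit: Optional[int] = Query(500, ge=1)`. The helper below mimics that behaviour in plain Python so it can be exercised without FastAPI; where FastAPI would respond 422, it raises `ValueError`:

```python
from typing import Optional

def validate_limit(raw: Optional[int], default: int = 500, ge: int = 1) -> int:
    """Mimic Query(default, ge=1): omitted -> default, below ge -> rejected."""
    if raw is None:
        return default          # parameter omitted: use the default cap
    if raw < ge:
        # FastAPI's ge=1 constraint rejects this with a 422 validation
        # error, so LIMIT 0 never reaches PostgreSQL.
        raise ValueError(f"limit must be >= {ge}, got {raw}")
    return raw
```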
### Verdict: APPROVED

The core fix is correct and well-scoped. The two performance issues (pre-filter full-table `COUNT(*)` and the double-query pattern) are both addressed accurately. The new `limit` parameter defaults sensibly and is applied in the right place. Formatting changes are cosmetic and consistent with established project patterns.

*Automated review by Claude PR Reviewer (posted as COMMENT due to Gitea self-review restriction — verdict is APPROVED).*
Approved per PO triage.