perf: parallelize per-pack card fetches after roll_for_cards (#99) #167
Reference: cal/paper-dynasty-discord#167
Closes #99
Changes
Replaced the sequential `for` loop that fetched cards one pack at a time with `asyncio.gather()` in `open_st_pr_packs()` (`helpers/main.py`).

Before: 5 packs = 5 sequential `db_get("cards", ...)` round-trips.
After: All pack card fetches fire concurrently via a nested `_fetch_pack_cards` coroutine + `asyncio.gather()`. The `pack_id` default injection (`card.setdefault("pack_id", p_id)`) is preserved per-pack inside the coroutine. Results are flattened into `all_cards` with a list comprehension.

Files Changed
`helpers/main.py` — lines 1764–1771 (7 insertions, 5 deletions)

Notes
No test suite covers this path. Ruff lint passes.
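The change described above can be sketched as follows. The names (`db_get`, `open_st_pr_packs`, `_fetch_pack_cards`, `pack_ids`) mirror those mentioned in the PR, but the bodies here are illustrative stand-ins, not the project's actual code:

```python
import asyncio

# Stand-in for the project's async DB helper (illustrative only).
async def db_get(table, pack_id):
    await asyncio.sleep(0.01)  # simulate one DB round-trip
    return {"cards": [{"name": f"card-{pack_id}-{i}"} for i in range(2)]}

async def open_st_pr_packs(pack_ids):
    if not pack_ids:
        raise ValueError("no packs to open")

    # After: one coroutine per pack, all round-trips fired concurrently.
    async def _fetch_pack_cards(p_id):
        result = await db_get("cards", p_id)
        for card in result["cards"]:
            # Default injection preserved per-pack, inside the coroutine.
            card.setdefault("pack_id", p_id)
        return result["cards"]

    results = await asyncio.gather(*(_fetch_pack_cards(p) for p in pack_ids))
    # Flatten: packs stay in input order, cards stay in per-pack order.
    all_cards = [card for pack_cards in results for card in pack_cards]
    if not all_cards:
        raise ValueError("no cards returned")
    return all_cards

cards = asyncio.run(open_st_pr_packs([1, 2, 3]))
print([c["pack_id"] for c in cards])  # [1, 1, 2, 2, 3, 3]
```

The "before" version would simply `await db_get("cards", p_id)` inside a plain `for` loop, serializing the round-trips.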
AI Code Review
Files Reviewed
`helpers/main.py` (modified — lines 1764–1777)

Findings
Correctness
`asyncio.gather()` preserves result order matching `pack_ids` input order — flattened `all_cards` ordering is identical to the original sequential build.
`card.setdefault("pack_id", p_id)` runs inside the coroutine before `return result["cards"]`, so all returned cards carry `pack_id`. Mutation is correctly scoped per-pack.
`gather()` without `return_exceptions=True` propagates the first raised exception to the caller, same as the original sequential loop. Remaining tasks continue running but their results are discarded — no behavioral regression.
`result["cards"]` KeyError risk is identical to the original `new_cards["cards"]` — not a regression.

Security
No issues found.

Style & Conventions
Matches the `asyncio.gather` pattern from PR #156 (`create_scout_opportunity` `asyncio.gather`) and PR #157 (card embed `asyncio.gather`), both already merged to main.

Suggestions
None.
Verdict: APPROVED
Clean parallelization. Ordering, mutation, and error behavior all preserved. Follows established project patterns.
(Posted as COMMENT — Gitea blocks self-approval)
Automated review by Claude PR Reviewer
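The ordering claim in the review above is easy to verify in isolation: `asyncio.gather()` returns results in the positional order the awaitables were passed, regardless of which finishes first. A minimal standalone check (names here are illustrative):

```python
import asyncio

async def fetch(i, delay):
    # Later inputs finish sooner, to show results are ordered by
    # position in the argument list, not by completion time.
    await asyncio.sleep(delay)
    return i

async def main():
    # i=0 is slowest (0.03s), i=2 is fastest (0.01s).
    return await asyncio.gather(*(fetch(i, 0.03 - 0.01 * i) for i in range(3)))

print(asyncio.run(main()))  # [0, 1, 2] — input order preserved
```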
AI Code Review
Files Reviewed
`helpers/main.py` (modified — lines 1764–1777)

Findings
Correctness
No issues found.
`asyncio` is imported at line 1. ✓
`pack_id` injection via `card.setdefault("pack_id", p_id)` preserved inside the `_fetch_pack_cards` coroutine. ✓
The flattening comprehension `[card for pack_cards in results for card in pack_cards]` is correct — same card order within each pack, packs in input order (gather preserves order). ✓
An empty `pack_ids` list is already guarded by `if not pack_ids: raise ValueError` before the gather is reached. ✓
The `if not all_cards:` guard is preserved post-gather. ✓
If a `db_get` call raises, the exception propagates from `asyncio.gather`. Sibling tasks continue running to completion but their results are discarded — same net outcome as a sequential loop where the first failure exits. ✓

Security
No issues found.
Style & Conventions
No issues found. Pattern matches the `asyncio.gather` usage already in this same function (line 1789, `create_scout_opportunity` parallelization from PR #156) and PR #157 (`get_card_embeds` parallelization).

Suggestions
Optional: `asyncio.gather(..., return_exceptions=True)` + explicit re-raise if you want to log which specific pack fetch failed before propagating. Not required — the outer `except` + error logger in callers covers this adequately. Pure opt-in observability improvement.

Verdict: APPROVED
Clean parallel refactor. Logic is correct, `pack_id` injection preserved per-pack, flattening is correct, and existing error guards are intact. Consistent with the gather pattern used in two adjacent PRs already merged to main.

Automated review by Claude PR Reviewer
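The reviewer's optional suggestion contrasts the default exception behavior with `return_exceptions=True` plus an explicit re-raise. A small sketch of both modes, using a hypothetical `fetch` coroutine in place of the real per-pack DB call:

```python
import asyncio

async def fetch(p_id):
    # Hypothetical per-pack fetch; pack 2 always fails.
    if p_id == 2:
        raise RuntimeError(f"pack {p_id} fetch failed")
    await asyncio.sleep(0.01)
    return p_id

async def default_mode(pack_ids):
    # Default gather: the first raised exception propagates to the
    # caller; sibling tasks keep running but their results are discarded.
    return await asyncio.gather(*(fetch(p) for p in pack_ids))

async def logged_mode(pack_ids):
    # return_exceptions=True: collect every outcome, log which pack
    # failed, then re-raise so caller-visible behavior is unchanged.
    results = await asyncio.gather(
        *(fetch(p) for p in pack_ids), return_exceptions=True
    )
    for p_id, res in zip(pack_ids, results):
        if isinstance(res, Exception):
            print(f"pack {p_id} failed: {res}")  # observability hook
            raise res
    return results

try:
    asyncio.run(default_mode([1, 2, 3]))
except RuntimeError as e:
    print(f"propagated: {e}")
```

Both modes surface the same exception to the caller; the second just identifies the failing pack first, which is why the review treats it as a pure observability improvement.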
1b885ca66f to 6b475ba439