perf: cache user team lookup in player_autocomplete, reduce limit to 25 #100
Reference: cal/major-domo-v2#100
Summary
Closes #99
- Add `_get_cached_user_team()` with a 60-second per-user TTL, so `get_user_major_league_team` is called at most once per minute instead of on every keystroke
- Reduce `search_players(limit=50)` → `limit=25` to match Discord's 25-choice display cap and avoid fetching unused results
- Add a `TestGetCachedUserTeam` test class covering cache hit, TTL expiry, and `None` caching; add a `clear_user_team_cache` autouse fixture to prevent test interference via module-level state

Test plan
- `python -m pytest tests/test_utils_autocomplete.py -v` — all 16 tests pass (3 new cache tests)
- `python -m pytest --tb=short -q` — 956 passed, 2 skipped

🤖 Generated with Claude Code
AI Code Review
Files Reviewed
- `utils/autocomplete.py` (modified)
- `tests/test_utils_autocomplete.py` (modified)

Findings
Correctness
- The `_get_cached_user_team` function is logically sound. The cache lookup checks TTL correctly using `time.time() - cached_at < _USER_TEAM_CACHE_TTL`, and the fallthrough path calls `get_user_major_league_team(user_id)` and writes the result back to the dict unconditionally — including `None` — which matches the PR's stated goal of caching "no team" users too.
- The reduction from `limit=50` to `limit=25` is correct. The existing display logic already capped output at 25 (`remaining_slots = 25 - len(choices)` and `user_team_players[:15]`), so the previous 50-item fetch was discarding up to half the results every time.
- The `Team` type added to the `models.team` import is needed for the `_user_team_cache` dict annotation — no phantom import.
- `_USER_TEAM_CACHE_TTL` is exported as a module-level name so the TTL-expiry test can import it to seed the stale cache entry. Pairing the constant with the test that depends on it is the right approach.
- In `test_player_autocomplete_prioritizes_user_team`, the test patches `utils.autocomplete.get_user_major_league_team` via plain `patch(...)`. Since `get_user_major_league_team` is `async def`, `patch` on Python 3.8+ auto-creates an `AsyncMock`, so `mock_get_team.return_value = user_team` is valid and the `await` inside `_get_cached_user_team` resolves correctly.

Security
- The cache key is `interaction.user.id` (a Discord snowflake int), which is safe to use as a dict key.

Style & Conventions
- `_get_cached_user_team` follows the existing underscore-prefix convention for module-private names.
- The `autouse` fixture in both `TestPlayerAutocomplete` and `TestGetCachedUserTeam` clears `_user_team_cache` before and after each test, correctly handling the shared module-level state. The fixture in `TestPlayerAutocomplete` is defensive (tests there patch the underlying function anyway), but it is the right habit given the cache now exists.
- The parenthesized `with (...)` context-manager syntax used throughout the test file is consistent and expected by the project's linter.

Suggestions
- The `_user_team_cache` dict is unbounded. In normal bot operation with a fixed set of Discord users this is fine — a league has a small number of owners — but it is worth noting that entries are never evicted except by TTL expiry on next access. A simple `if len(_user_team_cache) > 500: _user_team_cache.clear()` safety valve could prevent any pathological growth, though this is not a realistic concern for this codebase size.
- The `test_player_autocomplete_success` and `test_player_autocomplete_with_team_info` tests do not patch `get_user_major_league_team`. Because `_get_cached_user_team` will now attempt a real call (or raise, since `team_service` is not mocked in those tests), these tests rely on the exception being swallowed by the outer `except Exception: return []` block in `player_autocomplete`. The tests still pass, but `user_team` being silently `None` due to an error rather than legitimately `None` is a subtle difference in what the tests actually cover. Not a blocker — the behavior is correct.

Verdict: APPROVED
Clean, targeted optimization. The caching implementation is correct, the limit reduction eliminates wasted API fetches, and the three new tests (cache hit, TTL expiry, `None` caching) plus the `autouse` isolation fixture give solid coverage of the new code path. No convention violations, no security issues.

Automated review by Claude PR Reviewer