Sync: update agents, paper-dynasty skills, sessions

- agents: issue-worker.md and pr-reviewer.md updated for standard
  branch naming (issue/<number>-<slug> instead of ai/<repo>#<number>)
- paper-dynasty: updated SKILL.md, generate_summary, smoke_test,
  validate_database scripts; added ecosystem_status.sh and plan/
- plugins: updated marketplace submodules and blocklist
- sessions: rotate session files, add session-analysis/
- settings: updated settings.json

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Cal Corum
Date:   2026-03-23 23:03:10 -05:00
Parent: 25722c5164
Commit: 0fa8486e93

29 changed files with 2642 additions and 326 deletions
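The branch-naming change described in the message above replaces `ai/<repo>#<number>` with `issue/<number>-<slug>`. As a sketch, a branch name under the new convention might be derived like this (the issue number, title, and slugification rule are illustrative assumptions, not taken from this commit):

```python
import re

# Hypothetical inputs -- not from the diff
issue_number = 123
issue_title = "Fix login redirect"

# Assumed slug rule: lowercase, runs of non-alphanumerics collapsed to single hyphens
slug = re.sub(r"[^a-z0-9]+", "-", issue_title.lower()).strip("-")
branch = f"issue/{issue_number}-{slug}"
print(branch)  # issue/123-fix-login-redirect
```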


@@ -30,10 +30,10 @@ You are an autonomous agent that fixes a single Gitea issue and opens a PR for h
 5. **Explore the code.** Read relevant files. Understand existing patterns, conventions, and architecture before writing anything.
-6. **Create a feature branch.**
+6. **Create a feature branch.** Use the branch name provided in your prompt.
 ```bash
 # Use -B to handle retries where the branch may already exist
-git checkout -B ai/<repo>-<issue_number>
+git checkout -B <branch_name>
 ```
 7. **Implement the fix.** Follow the repo's existing conventions. Keep changes minimal and focused. Check imports. Don't over-engineer.
@@ -71,7 +71,7 @@ You are an autonomous agent that fixes a single Gitea issue and opens a PR for h
 11. **Push the branch.**
 ```bash
-git push -u origin ai/<repo>-<issue_number>
+git push -u origin <branch_name>
 ```
 12. **Create a PR** via `mcp__gitea-mcp__create_pull_request`:
@@ -80,7 +80,7 @@ You are an autonomous agent that fixes a single Gitea issue and opens a PR for h
 - `title`: "fix: <concise title> (#<issue_number>)"
 - `body`: Must start with `Closes #<issue_number>` on its own line (Gitea auto-close keyword), followed by a summary of changes, what was fixed, files changed, test results
 - `base`: main branch (usually "main")
-- `head`: "ai/<repo>-<issue_number>"
+- `head`: the branch name from your prompt
 13. **Update labels.** Remove `status/in-progress` and add `status/pr-open` via the label MCP tools.


@@ -140,4 +140,4 @@ Or on failure:
 - **Be proportionate.** Don't REQUEST_CHANGES for trivial style differences or subjective preferences.
 - **Stay in scope.** Review only the PR's changes. Don't flag pre-existing issues in surrounding code.
 - **Respect CLAUDE.md.** The project's CLAUDE.md is the source of truth for conventions. If the code follows CLAUDE.md, approve it even if you'd prefer a different style.
-- **Consider the author.** PRs from `ai/` branches were created by the issue-worker agent. Be especially thorough on these — you're the safety net.
+- **Consider the author.** PRs from `issue/` branches were created by the issue-worker agent. Be especially thorough on these — you're the safety net.

File diff suppressed because one or more lines are too long


@@ -1,5 +1,5 @@
 {
-  "fetchedAt": "2026-03-22T07:00:45.125Z",
+  "fetchedAt": "2026-03-24T04:00:47.945Z",
   "plugins": [
     {
       "plugin": "code-review@claude-plugins-official",


@@ -23,10 +23,10 @@
   "playground@claude-plugins-official": [
     {
       "scope": "user",
-      "installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/playground/61c0597779bd",
-      "version": "61c0597779bd",
+      "installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/playground/15268f03d2f5",
+      "version": "15268f03d2f5",
       "installedAt": "2026-02-18T19:51:28.422Z",
-      "lastUpdated": "2026-03-21T01:00:49.236Z",
+      "lastUpdated": "2026-03-23T20:15:51.541Z",
       "gitCommitSha": "261ce4fba4f2c314c490302158909a32e5889c88"
     }
   ],
@@ -43,10 +43,10 @@
   "frontend-design@claude-plugins-official": [
     {
       "scope": "user",
-      "installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/frontend-design/61c0597779bd",
-      "version": "61c0597779bd",
+      "installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/frontend-design/15268f03d2f5",
+      "version": "15268f03d2f5",
       "installedAt": "2026-02-22T05:53:45.091Z",
-      "lastUpdated": "2026-03-21T01:00:49.230Z",
+      "lastUpdated": "2026-03-23T20:15:51.536Z",
       "gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
     }
   ],
@@ -63,10 +63,10 @@
   "session@agent-toolkit": [
     {
       "scope": "user",
-      "installPath": "/home/cal/.claude/plugins/cache/agent-toolkit/session/3.6.1",
-      "version": "3.6.1",
+      "installPath": "/home/cal/.claude/plugins/cache/agent-toolkit/session/3.7.0",
+      "version": "3.7.0",
       "installedAt": "2026-03-18T23:37:09.034Z",
-      "lastUpdated": "2026-03-21T00:15:51.940Z",
+      "lastUpdated": "2026-03-23T18:00:49.746Z",
       "gitCommitSha": "8c6e15ce7c51ae53121ec12d8dceee3c8bf936c6"
     }
   ],
@@ -173,12 +173,12 @@
   "atlassian@claude-plugins-official": [
     {
       "scope": "project",
+      "projectPath": "/home/cal/work/esb-monorepo",
       "installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/atlassian/385c1469c567",
       "version": "385c1469c567",
       "installedAt": "2026-03-21T04:15:01.344Z",
       "lastUpdated": "2026-03-21T04:15:01.344Z",
-      "gitCommitSha": "385c1469c567399970e1d3fc23687a0312aa63dc",
-      "projectPath": "/home/cal/work/esb-monorepo"
+      "gitCommitSha": "385c1469c567399970e1d3fc23687a0312aa63dc"
     }
   ]
 }


@@ -5,7 +5,7 @@
       "url": "https://github.com/anthropics/claude-plugins-official.git"
     },
     "installLocation": "/home/cal/.claude/plugins/marketplaces/claude-plugins-official",
-    "lastUpdated": "2026-03-20T15:01:13.664Z"
+    "lastUpdated": "2026-03-23T15:53:46.553Z"
   },
   "claude-code-plugins": {
     "source": {
@@ -13,7 +13,7 @@
       "repo": "anthropics/claude-code"
     },
     "installLocation": "/home/cal/.claude/plugins/marketplaces/claude-code-plugins",
-    "lastUpdated": "2026-03-22T07:00:48.599Z"
+    "lastUpdated": "2026-03-24T04:01:11.681Z"
   },
   "agent-toolkit": {
     "source": {
@@ -21,7 +21,7 @@
       "repo": "St0nefish/agent-toolkit"
     },
     "installLocation": "/home/cal/.claude/plugins/marketplaces/agent-toolkit",
-    "lastUpdated": "2026-03-22T05:00:49.760Z",
+    "lastUpdated": "2026-03-24T03:30:48.201Z",
     "autoUpdate": true
   },
   "cal-claude-plugins": {
@@ -30,7 +30,7 @@
       "url": "https://git.manticorum.com/cal/claude-plugins.git"
     },
     "installLocation": "/home/cal/.claude/plugins/marketplaces/cal-claude-plugins",
-    "lastUpdated": "2026-03-21T04:01:23.285Z",
+    "lastUpdated": "2026-03-23T17:31:33.371Z",
     "autoUpdate": true
   }
 }

@@ -1 +1 @@
-Subproject commit 266237bb258d111433f099d86d735bd9e780569e
+Subproject commit 070f1d7f7485084a5336c6635593482c22c4387d

@@ -1 +1 @@
-Subproject commit 61c0597779bd2d670dcd4fbbf37c66aa19eb2ce6
+Subproject commit 15268f03d2f560b955584a283927ed175569678c

session-analysis/state.json (new file, 101 lines)

@@ -0,0 +1,101 @@
{
"version": 1,
"last_run": "2026-03-23T17:36:39Z",
"analyzed": {
"1e366762-a0e9-4620-b5d4-352b18bf4603": {
"project_slug": "-home-cal-work",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-23T17:33:09Z"
},
"16964e75-0eb6-4a9b-ab7c-a1e529ae6ff8": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-02-24T17:11:58Z"
},
"f1d7aeee-ea5e-4eb8-b335-1c300c17ae53": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-02-26T17:40:26Z"
},
"94e2f9b8-63a8-451d-9c49-3049793dab7a": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-02-27T12:57:30Z"
},
"b55a6ece-a2d4-40c8-82bf-ca26fbf321b4": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-04T16:11:14Z"
},
"a26ce1f8-67d1-46bf-8533-00d75a908e16": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-05T14:23:33Z"
},
"1f46a730-a955-444b-950b-cf5676cbed44": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-05T17:00:00Z"
},
"aef3ed6e-c0b8-4440-980a-5eb3fc066395": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-05T18:00:00Z"
},
"3be575a2-ccf7-419d-b593-28de38900ce6": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-05T19:00:00Z"
},
"0eb043e8-0c8f-496b-a002-d3df91156dbe": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-09T00:00:00Z"
},
"76dd16c3-7f35-49a4-8bfe-9ac796dd8ea3": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-10T00:00:00Z"
},
"a68078d0-88db-4306-bded-f493264760bc": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-11T00:00:00Z"
},
"d10a068b-d3dc-4da9-b799-9175834eaed5": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-12T00:00:00Z"
},
"2af0c500-0b5d-4755-9823-b12116704f2c": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-12T18:04:19Z"
},
"372b5724-f70d-4787-aaed-de6aa157a2dc": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-13T14:30:55Z"
},
"fb411708-ac74-4485-ab27-be9a46f0dba4": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-18T14:09:15Z"
},
"cefc895c-cc54-4802-a333-23fe9d249a51": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-19T15:34:10Z"
},
"583b5f8a-885a-4a0d-b62d-61744cd32cb7": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-19T17:09:10Z"
},
"74fe7722-54a8-473c-a710-b3efa615d515": {
"project_slug": "-home-cal-work-esb-monorepo",
"analyzed_at": "2026-03-23T17:36:39Z",
"session_started": "2026-03-20T14:20:17Z"
}
}
}

sessions/1794866.json (new file, 1 line)

@@ -0,0 +1 @@
{"pid":1794866,"sessionId":"0fa5054d-b5c6-4499-b59c-9a0f8fae56f5","cwd":"/mnt/NV2/Development/paper-dynasty","startedAt":1774267485456}

sessions/1841495.json (new file, 1 line)

@@ -0,0 +1 @@
{"pid":1841495,"sessionId":"d582937f-e7b1-4131-8046-993531618bc2","cwd":"/mnt/NV2/Development/claude-home","startedAt":1774271490814}

sessions/2073728.json (new file, 1 line)

@@ -0,0 +1 @@
{"pid":2073728,"sessionId":"1e366762-a0e9-4620-b5d4-352b18bf4603","cwd":"/home/cal/work","startedAt":1774287133224}

sessions/2085347.json (new file, 1 line)

@@ -0,0 +1 @@
{"pid":2085347,"sessionId":"c9852a3c-c4ff-4914-a22f-a6853f93d712","cwd":"/mnt/NV2/Development/major-domo","startedAt":1774287409057}

sessions/2369320.json (new file, 1 line)

@@ -0,0 +1 @@
{"pid":2369320,"sessionId":"5296e222-0748-4619-acbe-b8c7e5b5f297","cwd":"/mnt/NV2/Development/cookbook","startedAt":1774303725948}
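Each of these session files is a single JSON record whose `startedAt` field is a Unix epoch in milliseconds. Decoding the record above is straightforward (the decoding itself is only an illustration, not part of the commit):

```python
import json
from datetime import datetime, timezone

# The sessions/2369320.json record shown above
raw = '{"pid":2369320,"sessionId":"5296e222-0748-4619-acbe-b8c7e5b5f297","cwd":"/mnt/NV2/Development/cookbook","startedAt":1774303725948}'
rec = json.loads(raw)

# startedAt is epoch milliseconds, so divide by 1000 before converting
started = datetime.fromtimestamp(rec["startedAt"] / 1000, tz=timezone.utc)
print(rec["pid"], rec["cwd"], started.isoformat())
```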

@@ -1 +0,0 @@
-{"pid":579031,"sessionId":"61fc6bcf-3693-4320-8e96-d4d934dfa0a0","cwd":"/mnt/NV2/Development/claude-home","startedAt":1774067785547}


@@ -97,5 +97,6 @@
     }
   },
   "autoUpdatesChannel": "latest",
-  "skipDangerousModePermissionPrompt": true
+  "skipDangerousModePermissionPrompt": true,
+  "voiceEnabled": true
 }


@@ -33,6 +33,13 @@ description: Paper Dynasty baseball card game management. USE WHEN user mentions
 - "List all teams in season 5"
 - "Find active gauntlet runs"
+
+**Ecosystem & Cross-Project**:
+- "PD status" / "ecosystem status" / "what needs work"
+- "Show PD ecosystem status" / "What's the status across all projects?"
+
+**Growth & Engagement**:
+- "growth roadmap" / "engagement" / "user retention"

 > **For deployment**, use the `deploy` skill instead.

 ---
@@ -274,5 +281,106 @@ $PD gauntlet list/teams/cleanup # Gauntlet operations
 ---
-**Last Updated**: 2026-02-14
-**Version**: 2.6 (Added live series workflow, PotM documentation for both data sources)
+
+## Ecosystem Dashboard
+
+Provides a cross-project view of all Paper Dynasty Gitea repos in a single terminal dashboard.
+
+**Script**: `~/.claude/skills/paper-dynasty/scripts/ecosystem_status.sh`
+
+**Trigger phrases**:
+- "Show PD ecosystem status"
+- "What's the status across all projects?"
+- "PD status" / "ecosystem status" / "what needs work"
+
+**Usage**:
+```bash
+# Requires GITEA_TOKEN in env (or auto-reads from gitea-mcp config)
+~/.claude/skills/paper-dynasty/scripts/ecosystem_status.sh
+```
+
+**What it shows**:
+- Open issue count per repo
+- Open PR count per repo
+- Latest commit SHA + date per repo
+- Recent commits (last 3) per repo with author and message
+- Open issue titles grouped by repo (with labels)
+- Open PR titles grouped by repo (with branch info)
+- Cross-repo totals
+
+**Repos covered**: paper-dynasty-database, paper-dynasty-discord, paper-dynasty-card-creation, paper-dynasty-website
+
+**Auth**: Uses `GITEA_TOKEN` env var. If unset, attempts to read from `~/.config/claude-code/mcp-servers/gitea-mcp.json`.
+
+---
+
+## Initiative Tracker (`pd-plan`)
+
+Local SQLite database tracking cross-project initiatives, priorities, and status.
+
+**CLI**: `python ~/.claude/skills/paper-dynasty/plan/cli.py [command]`
+**Database**: `~/.claude/skills/paper-dynasty/plan/initiatives.db`
+
+**Trigger phrases**:
+- "what should I work on" / "what's the priority"
+- "initiative status" / "pd-plan" / "show priorities"
+- "update initiative" / "mark done"
+
+**Quick reference**:
+```bash
+PDP="python ~/.claude/skills/paper-dynasty/plan/cli.py"
+$PDP summary              # Dashboard — run at session start
+$PDP list                 # All active initiatives
+$PDP list --phase 1       # Phase 1 only
+$PDP list --repo discord  # Filter by repo
+$PDP next                 # Highest priority non-blocked item
+$PDP next --repo discord  # Next for a specific repo
+$PDP show 1               # Full details + activity log
+$PDP add "Title" --phase 1 --priority 20 --impact retention --size M --repos discord
+$PDP update 3 --status in_progress --actor pd-discord
+$PDP update 3 --note "Merged 8 PRs" --actor pd-ops
+$PDP update 3 --link "discord#104"  # Append linked issue
+$PDP done 3 --actor pd-ops          # Mark complete
+$PDP list --json          # Machine-readable output
+```
+
+**Session startup**: Always run `pd-plan summary` at the start of a Paper Dynasty session to understand current priorities.
+
+---
+
+## Growth Roadmap
+
+High-level roadmap for Paper Dynasty player growth, engagement, and retention strategies.
+
+**File**: `/mnt/NV2/Development/paper-dynasty/ROADMAP.md`
+
+**Trigger phrases**:
+- "growth roadmap" / "engagement" / "user retention"
+- "what's planned" / "next features"
+
+Load the roadmap file for context before discussing growth priorities, feature planning, or retention strategies. Use `pd-plan` for current status of specific initiatives.
+
+---
+
+## Specialized Agents
+
+Dispatch work to these agents for their respective domains. Do not do their work inline — launch them explicitly.
+
+| Agent | Model | Domain | Dispatch When |
+|-------|-------|--------|---------------|
+| `pd-database` | Opus | Database/API | Schema changes, endpoints, migrations, data model |
+| `pd-discord` | Opus | Discord bot | Commands, gameplay engine, bot UX |
+| `pd-cards` | Opus | Card pipeline | Card generation, ratings, scouting, rendering |
+| `pd-growth` | Opus | Product growth | Engagement, retention, roadmap prioritization |
+| `pd-ops` | Sonnet | Release ops | Merging PRs, deploys, branch cleanup, process |
+
+PO agents (Opus) decide **what** to build. `pd-ops` ensures it **ships correctly**. Implementation is delegated to `engineer`, `issue-worker`, or `swarm-coder`.
+
+**How to dispatch**: Mention the agent name explicitly, e.g. "use the pd-cards agent to regenerate scouting" or "dispatch to pd-database for this migration".
+
+---
+
+**Last Updated**: 2026-03-22
+**Version**: 2.8 (Added pd-plan initiative tracker, pd-ops agent, updated agent table with models)

skills/paper-dynasty/plan/cli.py (new executable file, 1028 lines)

File diff suppressed because it is too large.

Binary file not shown.


@@ -0,0 +1,5 @@
Metadata-Version: 2.4
Name: pd-plan
Version: 1.0.0
Summary: Paper Dynasty initiative tracker — local SQLite CLI for cross-project priorities
Requires-Python: >=3.10


@@ -0,0 +1,7 @@
cli.py
pyproject.toml
pd_plan.egg-info/PKG-INFO
pd_plan.egg-info/SOURCES.txt
pd_plan.egg-info/dependency_links.txt
pd_plan.egg-info/entry_points.txt
pd_plan.egg-info/top_level.txt


@@ -0,0 +1,2 @@
[console_scripts]
pd-plan = cli:main


@@ -0,0 +1 @@
cli


@@ -0,0 +1,12 @@
[project]
name = "pd-plan"
version = "1.0.0"
description = "Paper Dynasty initiative tracker — local SQLite CLI for cross-project priorities"
requires-python = ">=3.10"

[project.scripts]
pd-plan = "cli:main"

[build-system]
requires = ["setuptools>=68.0"]
build-backend = "setuptools.build_meta"


@@ -0,0 +1,273 @@
#!/usr/bin/env bash
# ecosystem_status.sh — Paper Dynasty cross-project dashboard
# Usage: GITEA_TOKEN=<token> ./ecosystem_status.sh
# or: ./ecosystem_status.sh (auto-reads from gitea-mcp config if available)
set -euo pipefail
# ---------------------------------------------------------------------------
# Auth
# ---------------------------------------------------------------------------
if [[ -z "${GITEA_TOKEN:-}" ]]; then
# Try to pull token from the gitea-mcp config (standard claude-code location)
GITEA_MCP_CONFIG="${HOME}/.config/claude-code/mcp-servers/gitea-mcp.json"
if [[ -f "$GITEA_MCP_CONFIG" ]]; then
GITEA_TOKEN=$(python3 -c "
import json, sys, os
cfg_path = os.environ.get('GITEA_MCP_CONFIG', '')
try:
cfg = json.load(open(cfg_path))
env = cfg.get('env', {})
print(env.get('GITEA_TOKEN', env.get('GITEA_API_TOKEN', '')))
except Exception:
print('')
" 2>/dev/null)
fi
fi
if [[ -z "${GITEA_TOKEN:-}" ]]; then
echo "ERROR: GITEA_TOKEN not set and could not be read from gitea-mcp config." >&2
echo " Set it with: export GITEA_TOKEN=your-token" >&2
exit 1
fi
API_BASE="https://git.manticorum.com/api/v1"
AUTH_HEADER="Authorization: token ${GITEA_TOKEN}"
REPOS=(
"cal/paper-dynasty-database"
"cal/paper-dynasty-discord"
"cal/paper-dynasty-card-creation"
"cal/paper-dynasty-website"
"cal/paper-dynasty-gameplay-webapp"
"cal/paper-dynasty-apiproxy"
)
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
gitea_get() {
# gitea_get <endpoint-path> — returns JSON or "null" on error
curl -sf -H "$AUTH_HEADER" -H "Content-Type: application/json" \
"${API_BASE}/${1}" 2>/dev/null || echo "null"
}
count_items() {
local json="$1"
if [[ "$json" == "null" || -z "$json" ]]; then
echo "?"
return
fi
python3 -c "
import json,sys
data=json.loads(sys.argv[1])
print(len(data) if isinstance(data,list) else '?')
" "$json" 2>/dev/null || echo "?"
}
# ---------------------------------------------------------------------------
# Banner
# ---------------------------------------------------------------------------
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
echo ""
echo "╔══════════════════════════════════════════════════════════╗"
echo "║ PAPER DYNASTY — ECOSYSTEM STATUS DASHBOARD ║"
printf "║ %-56s ║\n" "$TIMESTAMP"
echo "╚══════════════════════════════════════════════════════════╝"
# ---------------------------------------------------------------------------
# Per-repo summary table
# ---------------------------------------------------------------------------
echo ""
printf "%-36s %6s %5s %s\n" "REPOSITORY" "ISSUES" "PRs" "LATEST COMMIT"
printf "%-36s %6s %5s %s\n" "────────────────────────────────────" "──────" "─────" "────────────────────────────────"
TOTAL_ISSUES=0
TOTAL_PRS=0
declare -A ALL_ISSUES_JSON
declare -A ALL_PRS_JSON
for REPO in "${REPOS[@]}"; do
SHORT_NAME="${REPO#cal/}"
ISSUES_JSON=$(gitea_get "repos/${REPO}/issues?type=issues&state=open&limit=50")
ISSUE_COUNT=$(count_items "$ISSUES_JSON")
ALL_ISSUES_JSON["$REPO"]="$ISSUES_JSON"
PRS_JSON=$(gitea_get "repos/${REPO}/pulls?state=open&limit=50")
PR_COUNT=$(count_items "$PRS_JSON")
ALL_PRS_JSON["$REPO"]="$PRS_JSON"
COMMITS_JSON=$(gitea_get "repos/${REPO}/commits?limit=1")
LATEST_SHA="n/a"
LATEST_DATE=""
if [[ "$COMMITS_JSON" != "null" && -n "$COMMITS_JSON" ]]; then
LATEST_SHA=$(python3 -c "
import json,sys
data=json.loads(sys.argv[1])
if isinstance(data,list) and data:
print(data[0].get('sha','')[:7])
else:
print('n/a')
" "$COMMITS_JSON" 2>/dev/null || echo "n/a")
LATEST_DATE=$(python3 -c "
import json,sys
data=json.loads(sys.argv[1])
if isinstance(data,list) and data:
ts=data[0].get('commit',{}).get('committer',{}).get('date','')
print(ts[:10] if ts else '')
else:
print('')
" "$COMMITS_JSON" 2>/dev/null || echo "")
fi
if [[ "$ISSUE_COUNT" =~ ^[0-9]+$ ]]; then
TOTAL_ISSUES=$((TOTAL_ISSUES + ISSUE_COUNT))
fi
if [[ "$PR_COUNT" =~ ^[0-9]+$ ]]; then
TOTAL_PRS=$((TOTAL_PRS + PR_COUNT))
fi
COMMIT_LABEL="${LATEST_SHA}${LATEST_DATE:+ [${LATEST_DATE}]}"
printf "%-36s %6s %5s %s\n" "$SHORT_NAME" "$ISSUE_COUNT" "$PR_COUNT" "$COMMIT_LABEL"
done
echo ""
printf " TOTALS: %d open issues, %d open PRs across %d repos\n" \
"$TOTAL_ISSUES" "$TOTAL_PRS" "${#REPOS[@]}"
# ---------------------------------------------------------------------------
# Recent commits — last 3 per repo
# ---------------------------------------------------------------------------
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo " RECENT COMMITS (last 3 per repo)"
echo "═══════════════════════════════════════════════════════════════"
for REPO in "${REPOS[@]}"; do
SHORT_NAME="${REPO#cal/}"
echo ""
echo "${SHORT_NAME}"
COMMITS_JSON=$(gitea_get "repos/${REPO}/commits?limit=3")
if [[ "$COMMITS_JSON" == "null" || -z "$COMMITS_JSON" ]]; then
echo " (could not fetch commits)"
continue
fi
python3 -c "
import json, sys
data = json.loads(sys.argv[1])
if not isinstance(data, list) or not data:
print(' (no commits)')
sys.exit(0)
for c in data:
sha = c.get('sha', '')[:7]
msg = c.get('commit', {}).get('message', '').split('\n')[0][:58]
ts = c.get('commit', {}).get('committer', {}).get('date', '')[:10]
author = c.get('commit', {}).get('committer', {}).get('name', 'unknown')[:16]
print(f' {sha} {ts} {author:<16} {msg}')
" "$COMMITS_JSON"
done
# ---------------------------------------------------------------------------
# Open issues detail
# ---------------------------------------------------------------------------
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo " OPEN ISSUES"
echo "═══════════════════════════════════════════════════════════════"
FOUND_ISSUES=false
for REPO in "${REPOS[@]}"; do
SHORT_NAME="${REPO#cal/}"
ISSUES_JSON="${ALL_ISSUES_JSON[$REPO]}"
ISSUE_COUNT=$(count_items "$ISSUES_JSON")
if [[ "$ISSUE_COUNT" == "0" || "$ISSUE_COUNT" == "?" ]]; then
continue
fi
echo ""
echo "${SHORT_NAME} (${ISSUE_COUNT} open)"
FOUND_ISSUES=true
python3 -c "
import json, sys
data = json.loads(sys.argv[1])
if not isinstance(data, list):
print(' (error reading issues)')
sys.exit(0)
for i in data[:10]:
num = i.get('number', '?')
title = i.get('title', '(no title)')[:65]
labels = ', '.join(l.get('name','') for l in i.get('labels',[]))
lstr = f' [{labels}]' if labels else ''
print(f' #{num:<4} {title}{lstr}')
if len(data) > 10:
print(f' ... and {len(data)-10} more')
" "$ISSUES_JSON"
done
if [[ "$FOUND_ISSUES" == "false" ]]; then
echo ""
echo " (no open issues across all repos)"
fi
# ---------------------------------------------------------------------------
# Open PRs detail
# ---------------------------------------------------------------------------
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo " OPEN PULL REQUESTS"
echo "═══════════════════════════════════════════════════════════════"
FOUND_PRS=false
for REPO in "${REPOS[@]}"; do
SHORT_NAME="${REPO#cal/}"
PRS_JSON="${ALL_PRS_JSON[$REPO]}"
PR_COUNT=$(count_items "$PRS_JSON")
if [[ "$PR_COUNT" == "0" || "$PR_COUNT" == "?" ]]; then
continue
fi
echo ""
echo "${SHORT_NAME} (${PR_COUNT} open)"
FOUND_PRS=true
python3 -c "
import json, sys
data = json.loads(sys.argv[1])
if not isinstance(data, list):
print(' (error reading PRs)')
sys.exit(0)
for pr in data:
num = pr.get('number', '?')
title = pr.get('title', '(no title)')[:65]
head = pr.get('head', {}).get('label', '')
base = pr.get('base', {}).get('label', '')
print(f' #{num:<4} {title}')
if head and base:
print(f' {head} → {base}')
" "$PRS_JSON"
done
if [[ "$FOUND_PRS" == "false" ]]; then
echo ""
echo " (no open PRs across all repos)"
fi
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo " Done. Gitea: https://git.manticorum.com/cal"
echo "═══════════════════════════════════════════════════════════════"
echo ""


@@ -3,140 +3,40 @@
 Generate summary report for Paper Dynasty card update
 Collects statistics and notable changes for release notes.
+Uses the Paper Dynasty API instead of direct database access.
 
 Usage:
-    python generate_summary.py <database_path> [--previous-db <path>]
+    python generate_summary.py [--cardset-id 24] [--env prod]
 """
 
-import sqlite3
 import sys
-import json
+import argparse
 from pathlib import Path
 from datetime import datetime
-from typing import List, Dict, Optional, Tuple
+from typing import List, Dict, Tuple
+
+sys.path.insert(0, str(Path(__file__).parent.parent))
+from api_client import PaperDynastyAPI
 
-RARITY_TIERS = {
-    "Reserve": 1,
-    "Replacement": 2,
-    "Starter": 3,
-    "All-Star": 4,
-    "MVP": 5,
-    "Hall of Fame": 6
-}
-
-def get_card_counts(cursor: sqlite3.Cursor) -> Dict[str, int]:
-    """Get total card counts"""
-    batting = cursor.execute("SELECT COUNT(*) FROM batting_cards").fetchone()[0]
-    pitching = cursor.execute("SELECT COUNT(*) FROM pitching_cards").fetchone()[0]
-    return {"batting": batting, "pitching": pitching, "total": batting + pitching}
+def get_card_counts(api: PaperDynastyAPI, cardset_id: int) -> Dict[str, int]:
+    """Get total card counts from the API"""
+    batting = api.get("battingcards", params=[("cardset_id", cardset_id)])
+    pitching = api.get("pitchingcards", params=[("cardset_id", cardset_id)])
+    b_count = batting.get("count", 0)
+    p_count = pitching.get("count", 0)
+    return {"batting": b_count, "pitching": p_count, "total": b_count + p_count}
 
-def get_new_players(cursor: sqlite3.Cursor, since_date: Optional[str] = None) -> int:
-    """Count new players added since date"""
-    if not since_date:
-        # Default to last 7 days
-        query = """
-            SELECT COUNT(DISTINCT player_name)
-            FROM (
-                SELECT player_name, created_date FROM batting_cards
-                WHERE created_date >= date('now', '-7 days')
-                UNION
-                SELECT player_name, created_date FROM pitching_cards
-                WHERE created_date >= date('now', '-7 days')
-            )
-        """
-    else:
-        query = f"""
-            SELECT COUNT(DISTINCT player_name)
-            FROM (
-                SELECT player_name FROM batting_cards
-                WHERE created_date >= '{since_date}'
-                UNION
-                SELECT player_name FROM pitching_cards
-                WHERE created_date >= '{since_date}'
-            )
-        """
-    try:
-        return cursor.execute(query).fetchone()[0]
-    except sqlite3.OperationalError:
-        # created_date column might not exist
-        return 0
+def get_player_count(api: PaperDynastyAPI, cardset_id: int) -> int:
+    """Get total player count for the cardset from the API"""
+    try:
+        players = api.list_players(cardset_id=cardset_id)
+        return len(players)
+    except Exception:
+        return 0
 
-def get_rarity_changes(
-    current_cursor: sqlite3.Cursor,
-    previous_db_path: Optional[Path] = None,
-    threshold: int = 2
-) -> Tuple[List[Dict], List[Dict]]:
-    """
-    Compare rarity changes between current and previous database.
-
-    Returns (upgrades, downgrades) where each is a list of dicts with:
-    - player_name
-    - card_id
-    - old_rarity
-    - new_rarity
-    - change (tier difference)
-    """
-    if not previous_db_path or not previous_db_path.exists():
-        return [], []
-
-    prev_conn = sqlite3.connect(previous_db_path)
-    prev_cursor = prev_conn.cursor()
-
-    upgrades = []
-    downgrades = []
-
-    # Compare batting cards
-    query = """
-        SELECT
-            c.player_name,
-            c.card_id,
-            p.rarity as old_rarity,
-            c.rarity as new_rarity
-        FROM batting_cards c
-        JOIN prev.batting_cards p ON c.card_id = p.card_id
-        WHERE c.rarity != p.rarity
-    """
-
-    try:
-        current_cursor.execute("ATTACH DATABASE ? AS prev", (str(previous_db_path),))
-        changes = current_cursor.execute(query).fetchall()
-
-        for name, card_id, old_rarity, new_rarity in changes:
-            old_tier = RARITY_TIERS.get(old_rarity, 0)
-            new_tier = RARITY_TIERS.get(new_rarity, 0)
-            change = new_tier - old_tier
-
-            if abs(change) >= threshold:
-                record = {
-                    "player_name": name,
-                    "card_id": card_id,
-                    "old_rarity": old_rarity,
-                    "new_rarity": new_rarity,
-                    "change": change
-                }
-                if change > 0:
-                    upgrades.append(record)
-                else:
-                    downgrades.append(record)
-
-        current_cursor.execute("DETACH DATABASE prev")
-    except sqlite3.OperationalError as e:
-        print(f"Warning: Could not compare rarity changes: {e}", file=sys.stderr)
-
-    prev_conn.close()
-
-    # Sort by magnitude of change
-    upgrades.sort(key=lambda x: x['change'], reverse=True)
-    downgrades.sort(key=lambda x: x['change'])
-
-    return upgrades, downgrades
 
 def get_date_range(card_creation_dir: Path) -> Tuple[str, str]:
     """Extract date range from retrosheet_data.py"""
     retrosheet_file = card_creation_dir / "retrosheet_data.py"
@ -160,11 +60,11 @@ def get_date_range(card_creation_dir: Path) -> Tuple[str, str]:
def generate_markdown_summary( def generate_markdown_summary(
counts: Dict[str, int], counts: Dict[str, int],
new_players: int, player_count: int,
upgrades: List[Dict],
downgrades: List[Dict],
date_range: Tuple[str, str], date_range: Tuple[str, str],
csv_files: List[str] csv_files: List[str],
cardset_id: int,
env: str,
) -> str: ) -> str:
"""Generate markdown summary report""" """Generate markdown summary report"""
@ -176,44 +76,14 @@ def generate_markdown_summary(
"", "",
"## Overview", "## Overview",
f"- **Total Cards**: {counts['batting']} batting, {counts['pitching']} pitching", f"- **Total Cards**: {counts['batting']} batting, {counts['pitching']} pitching",
f"- **New Players**: {new_players}", f"- **Total Players**: {player_count}",
f"- **Data Range**: {start_date} to {end_date}", f"- **Data Range**: {start_date} to {end_date}",
f"- **Cardset ID**: {cardset_id} ({env})",
"", "",
]
if upgrades or downgrades:
lines.append("## Notable Rarity Changes")
lines.append("")
if upgrades:
lines.append("### Upgrades")
for player in upgrades[:10]: # Max 10
tier_change = f"+{player['change']}" if player['change'] > 0 else str(player['change'])
lines.append(
f"- **{player['player_name']}** (ID: {player['card_id']}): "
f"{player['old_rarity']}{player['new_rarity']} ({tier_change} tiers)"
)
if len(upgrades) > 10:
lines.append(f"- *...and {len(upgrades) - 10} more*")
lines.append("")
if downgrades:
lines.append("### Downgrades")
for player in downgrades[:10]: # Max 10
tier_change = str(player['change']) # Already negative
lines.append(
f"- **{player['player_name']}** (ID: {player['card_id']}): "
f"{player['old_rarity']}{player['new_rarity']} ({tier_change} tiers)"
)
if len(downgrades) > 10:
lines.append(f"- *...and {len(downgrades) - 10} more*")
lines.append("")
lines.extend([
"## Files Generated", "## Files Generated",
"- ✅ Card images uploaded to S3", "- ✅ Card images uploaded to S3",
"- ✅ Scouting CSVs transferred to database server", "- ✅ Scouting CSVs transferred to database server",
]) ]
for csv_file in csv_files: for csv_file in csv_files:
lines.append(f" - {csv_file}") lines.append(f" - {csv_file}")
@ -233,28 +103,27 @@ def generate_markdown_summary(
def main(): def main():
if len(sys.argv) < 2: parser = argparse.ArgumentParser(
print("Usage: python generate_summary.py <database_path> [--previous-db <path>]") description="Generate summary report for Paper Dynasty card update"
sys.exit(1) )
parser.add_argument(
"--cardset-id", type=int, default=24,
help="Cardset ID to summarize (default: 24, the live cardset)"
)
parser.add_argument(
"--env", choices=["prod", "dev"], default="prod",
help="API environment (default: prod)"
)
args = parser.parse_args()
db_path = Path(sys.argv[1]) api = PaperDynastyAPI(environment=args.env)
previous_db = Path(sys.argv[3]) if len(sys.argv) > 3 and sys.argv[2] == '--previous-db' else None
if not db_path.exists(): # Collect data from API
print(f"❌ Database not found: {db_path}") counts = get_card_counts(api, args.cardset_id)
sys.exit(1) player_count = get_player_count(api, args.cardset_id)
# Connect to database # Get date range from retrosheet_data.py if present
conn = sqlite3.connect(db_path) card_creation_dir = Path("/mnt/NV2/Development/paper-dynasty/card-creation")
cursor = conn.cursor()
# Collect data
counts = get_card_counts(cursor)
new_players = get_new_players(cursor)
upgrades, downgrades = get_rarity_changes(cursor, previous_db, threshold=2)
# Get date range from retrosheet_data.py
card_creation_dir = db_path.parent
date_range = get_date_range(card_creation_dir) date_range = get_date_range(card_creation_dir)
# Get CSV files from scouting directory # Get CSV files from scouting directory
@ -262,7 +131,9 @@ def main():
csv_files = sorted([f.name for f in scouting_dir.glob("*.csv")]) if scouting_dir.exists() else [] csv_files = sorted([f.name for f in scouting_dir.glob("*.csv")]) if scouting_dir.exists() else []
# Generate summary # Generate summary
summary = generate_markdown_summary(counts, new_players, upgrades, downgrades, date_range, csv_files) summary = generate_markdown_summary(
counts, player_count, date_range, csv_files, args.cardset_id, args.env
)
# Print to stdout # Print to stdout
print(summary) print(summary)
@ -273,8 +144,6 @@ def main():
output_file.write_text(summary) output_file.write_text(summary)
print(f"\n✅ Summary saved to: {output_file}", file=sys.stderr) print(f"\n✅ Summary saved to: {output_file}", file=sys.stderr)
conn.close()
if __name__ == "__main__": if __name__ == "__main__":
main() main()
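The summary script's CLI now takes `--cardset-id` and `--env` flags instead of a positional database path. A minimal, standalone sketch of that argparse setup (the flag names and defaults are taken from the diff above; `build_parser` is a hypothetical helper added here for illustration):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the flags the updated generate_summary script accepts.
    parser = argparse.ArgumentParser(
        description="Generate summary report for Paper Dynasty card update"
    )
    parser.add_argument("--cardset-id", type=int, default=24,
                        help="Cardset ID to summarize (default: 24, the live cardset)")
    parser.add_argument("--env", choices=["prod", "dev"], default="prod",
                        help="API environment (default: prod)")
    return parser

# With no flags, the defaults select the live cardset against prod.
args = build_parser().parse_args([])
assert (args.cardset_id, args.env) == (24, "prod")

# Flags override the defaults; argparse converts --cardset-id to args.cardset_id.
args = build_parser().parse_args(["--cardset-id", "25", "--env", "dev"])
assert (args.cardset_id, args.env) == (25, "dev")
```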

View File

@@ -296,15 +296,14 @@ class SmokeTestRunner:
 
     def run_all(self, mode: str = "quick"):
         """Run smoke test checks. mode='quick' for core, 'full' for everything."""
         base = self.api.base_url
-        t = 10 if mode == "quick" else 30  # quick should be fast
 
         # Pre-fetch IDs for by-ID lookups (full mode only)
         if mode == "full":
             team_id = self._fetch_id("teams", params=[("limit", 1)])
             player_id = self._fetch_id("players", params=[("limit", 1)])
             card_id = self._fetch_id("cards", params=[("limit", 1)])
-            game_id = self._fetch_id("games", params=[("limit", 1)])
-            result_id = self._fetch_id("results", params=[("limit", 1)])
+            game_id = self._fetch_id("games")
+            result_id = self._fetch_id("results")
             track_id = self._fetch_id("evolution/tracks", requires_auth=True)
         else:
             team_id = player_id = card_id = game_id = result_id = track_id = None
@@ -380,14 +379,11 @@ class SmokeTestRunner:
                 expect_list=True,
                 min_count=1,
             ),
-            self.check(
-                "Events", "economy", "events", params=[("limit", 5)], expect_list=True
-            ),
+            self.check("Events", "economy", "events", expect_list=True),
             self.check(
                 "Scout opportunities",
                 "scouting",
                 "scout_opportunities",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
@@ -493,7 +489,6 @@ class SmokeTestRunner:
                 "Batting card ratings",
                 "cards",
                 "battingcardratings",
-                params=[("limit", 5)],
                 expect_list=True,
                 requires_auth=True,
             ),
@@ -501,7 +496,6 @@ class SmokeTestRunner:
                 "Pitching card ratings",
                 "cards",
                 "pitchingcardratings",
-                params=[("limit", 5)],
                 expect_list=True,
                 requires_auth=True,
             ),
@@ -509,7 +503,6 @@ class SmokeTestRunner:
                 "Card positions",
                 "cards",
                 "cardpositions",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             *(
@@ -541,7 +534,6 @@ class SmokeTestRunner:
                 "Games list",
                 "games",
                 "games",
-                params=[("limit", 5)],
                 expect_list=True,
                 min_count=1,
             ),
@@ -549,7 +541,6 @@ class SmokeTestRunner:
                 "Results list",
                 "games",
                 "results",
-                params=[("limit", 5)],
                 expect_list=True,
                 min_count=1,
             ),
@@ -619,14 +610,12 @@ class SmokeTestRunner:
                 "Rewards",
                 "economy",
                 "rewards",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "Game rewards",
                 "economy",
                 "gamerewards",
-                params=[("limit", 5)],
                 expect_list=True,
                 min_count=1,
             ),
@@ -634,28 +623,24 @@ class SmokeTestRunner:
                 "Gauntlet rewards",
                 "economy",
                 "gauntletrewards",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "Gauntlet runs",
                 "economy",
                 "gauntletruns",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "Awards",
                 "economy",
                 "awards",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "Notifications",
                 "economy",
                 "notifs",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             # Scouting extras
@@ -663,21 +648,18 @@ class SmokeTestRunner:
                 "Scout claims",
                 "scouting",
                 "scout_claims",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "MLB players",
                 "scouting",
                 "mlbplayers",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "Paperdex",
                 "scouting",
                 "paperdex",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             *(
@@ -699,14 +681,12 @@ class SmokeTestRunner:
                 "Batting stats",
                 "stats",
                 "batstats",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
                 "Pitching stats",
                 "stats",
                 "pitstats",
-                params=[("limit", 5)],
                 expect_list=True,
             ),
             self.check(
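The repeated change in this file is dropping the explicit `params=[("limit", 5)]` from each `self.check(...)` entry, letting the endpoint's own paging defaults apply. A small sketch of how that works when `params` is an optional argument (the `check` function here is a hypothetical stand-in for `SmokeTestRunner.check`, not the real implementation):

```python
from typing import Any, Optional, List, Tuple

def check(name: str, group: str, endpoint: str,
          params: Optional[List[Tuple[str, Any]]] = None, **flags: Any) -> dict:
    # Records what a single smoke-test call would request. With the limit
    # params removed at the call site, params defaults to None and no
    # explicit limit is sent.
    return {"name": name, "group": group, "endpoint": endpoint,
            "params": params or [], **flags}

before = check("Events", "economy", "events",
               params=[("limit", 5)], expect_list=True)
after = check("Events", "economy", "events", expect_list=True)

assert before["params"] == [("limit", 5)]
assert after["params"] == []          # no explicit limit any more
assert after["expect_list"] is True
```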

View File

@@ -2,30 +2,33 @@
 """
 Database Validation Script for Paper Dynasty Card Generation
 
-Checks for common errors in card database before uploading:
-- Negative groundball_b values (causes gameplay crashes)
-- Invalid percentage ranges (>100 or <0)
-- NULL values in required fields
+Checks for common errors in card data via the API before deployment:
+- Missing batting or pitching cards for a cardset
+- Players with no corresponding card record
+- Rarity distribution sanity check
 
 Usage:
-    python validate_database.py <database_path>
+    python validate_database.py [--cardset-id 24] [--env prod]
 """
 
-import sqlite3
 import sys
+import argparse
 from pathlib import Path
-from typing import List, Tuple, Dict
+from typing import List, Dict
+
+sys.path.insert(0, str(Path(__file__).parent.parent))
+from api_client import PaperDynastyAPI
 
 class ValidationError:
-    def __init__(self, table: str, issue: str, count: int, examples: List[Dict]):
-        self.table = table
+    def __init__(self, entity: str, issue: str, count: int, examples: List[str]):
+        self.entity = entity
         self.issue = issue
         self.count = count
         self.examples = examples
 
     def __str__(self):
-        lines = [f"{self.table}: {self.issue} ({self.count} records)"]
+        lines = [f"{self.entity}: {self.issue} ({self.count} records)"]
        for example in self.examples[:5]:  # Show max 5 examples
             lines.append(f"  - {example}")
         if self.count > 5:
@@ -33,113 +36,109 @@ class ValidationError:
         return "\n".join(lines)
 
-def validate_batting_cards(cursor: sqlite3.Cursor) -> List[ValidationError]:
-    """Validate batting cards table"""
-    errors = []
-
-    # Check 1: Negative groundball_b values
-    cursor.execute("""
-        SELECT player_name, card_id, groundball_b, rarity
-        FROM batting_cards
-        WHERE groundball_b < 0
-        ORDER BY groundball_b ASC
-        LIMIT 10
-    """)
-    negative_gb = cursor.fetchall()
-
-    if negative_gb:
-        count = cursor.execute("SELECT COUNT(*) FROM batting_cards WHERE groundball_b < 0").fetchone()[0]
-        examples = [
-            f"{name} (ID: {card_id}, GB-B: {gb}, Rarity: {rarity})"
-            for name, card_id, gb, rarity in negative_gb
-        ]
-        errors.append(ValidationError("batting_cards", "Negative groundball_b values", count, examples))
-
-    # Check 2: Invalid percentage ranges (0-100)
-    # Adjust column names based on actual schema
-    percentage_columns = ['strikeout', 'walk', 'homerun']  # Add more as needed
-    for col in percentage_columns:
-        try:
-            cursor.execute(f"""
-                SELECT player_name, card_id, {col}
-                FROM batting_cards
-                WHERE {col} < 0 OR {col} > 100
-                LIMIT 10
-            """)
-            invalid_pct = cursor.fetchall()
-            if invalid_pct:
-                count = cursor.execute(f"SELECT COUNT(*) FROM batting_cards WHERE {col} < 0 OR {col} > 100").fetchone()[0]
-                examples = [f"{name} (ID: {card_id}, {col}: {val})" for name, card_id, val in invalid_pct]
-                errors.append(ValidationError("batting_cards", f"Invalid {col} percentage", count, examples))
-        except sqlite3.OperationalError:
-            # Column might not exist - skip
-            pass
-
-    return errors
-
-def validate_pitching_cards(cursor: sqlite3.Cursor) -> List[ValidationError]:
-    """Validate pitching cards table"""
-    errors = []
-
-    # Check: Invalid percentage ranges
-    percentage_columns = ['strikeout', 'walk', 'homerun']  # Add more as needed
-    for col in percentage_columns:
-        try:
-            cursor.execute(f"""
-                SELECT player_name, card_id, {col}
-                FROM pitching_cards
-                WHERE {col} < 0 OR {col} > 100
-                LIMIT 10
-            """)
-            invalid_pct = cursor.fetchall()
-            if invalid_pct:
-                count = cursor.execute(f"SELECT COUNT(*) FROM pitching_cards WHERE {col} < 0 OR {col} > 100").fetchone()[0]
-                examples = [f"{name} (ID: {card_id}, {col}: {val})" for name, card_id, val in invalid_pct]
-                errors.append(ValidationError("pitching_cards", f"Invalid {col} percentage", count, examples))
-        except sqlite3.OperationalError:
-            # Column might not exist - skip
-            pass
-
-    return errors
-
-def get_card_counts(cursor: sqlite3.Cursor) -> Dict[str, int]:
-    """Get total card counts"""
-    batting_count = cursor.execute("SELECT COUNT(*) FROM batting_cards").fetchone()[0]
-    pitching_count = cursor.execute("SELECT COUNT(*) FROM pitching_cards").fetchone()[0]
-    return {"batting": batting_count, "pitching": pitching_count}
+def get_card_counts(api: PaperDynastyAPI, cardset_id: int) -> Dict[str, int]:
+    """Get total card counts for the cardset from the API"""
+    batting = api.get("battingcards", params=[("cardset_id", cardset_id)])
+    pitching = api.get("pitchingcards", params=[("cardset_id", cardset_id)])
+    return {
+        "batting": batting.get("count", 0),
+        "pitching": pitching.get("count", 0),
+    }
+
+def validate_card_counts(api: PaperDynastyAPI, cardset_id: int) -> List[ValidationError]:
+    """Check that batting and pitching cards exist for the cardset"""
+    errors = []
+    counts = get_card_counts(api, cardset_id)
+    if counts["batting"] == 0:
+        errors.append(ValidationError(
+            "battingcards", f"No batting cards found for cardset {cardset_id}", 0, []
+        ))
+    if counts["pitching"] == 0:
+        errors.append(ValidationError(
+            "pitchingcards", f"No pitching cards found for cardset {cardset_id}", 0, []
+        ))
+    return errors
+
+def validate_player_coverage(api: PaperDynastyAPI, cardset_id: int) -> List[ValidationError]:
+    """Check that every player in the cardset has at least one card"""
+    errors = []
+    try:
+        players = api.list_players(cardset_id=cardset_id)
+    except Exception as e:
+        errors.append(ValidationError("players", f"Could not fetch players: {e}", 0, []))
+        return errors
+
+    if not players:
+        errors.append(ValidationError(
+            "players", f"No players found for cardset {cardset_id}", 0, []
+        ))
+        return errors
+
+    batting_data = api.get("battingcards", params=[("cardset_id", cardset_id)])
+    pitching_data = api.get("pitchingcards", params=[("cardset_id", cardset_id)])
+    batting_player_ids = {
+        c["player"]["player_id"]
+        for c in batting_data.get("cards", [])
+        if c.get("player")
+    }
+    pitching_player_ids = {
+        c["player"]["player_id"]
+        for c in pitching_data.get("cards", [])
+        if c.get("player")
+    }
+    all_card_player_ids = batting_player_ids | pitching_player_ids
+
+    uncovered = [
+        p for p in players
+        if p["player_id"] not in all_card_player_ids
+    ]
+    if uncovered:
+        examples = [
+            f"{p.get('p_name', 'Unknown')} (ID: {p['player_id']}, rarity: {p.get('rarity', {}).get('name', '?') if isinstance(p.get('rarity'), dict) else p.get('rarity', '?')})"
+            for p in uncovered
+        ]
+        errors.append(ValidationError(
+            "players", "Players with no batting or pitching card", len(uncovered), examples
+        ))
+
+    return errors
 
 def main():
-    if len(sys.argv) != 2:
-        print("Usage: python validate_database.py <database_path>")
-        sys.exit(1)
-
-    db_path = Path(sys.argv[1])
-    if not db_path.exists():
-        print(f"❌ Database not found: {db_path}")
-        sys.exit(1)
+    parser = argparse.ArgumentParser(
        description="Validate Paper Dynasty card data via the API"
+    )
+    parser.add_argument(
+        "--cardset-id", type=int, default=24,
+        help="Cardset ID to validate (default: 24, the live cardset)"
+    )
+    parser.add_argument(
+        "--env", choices=["prod", "dev"], default="prod",
+        help="API environment (default: prod)"
+    )
+    args = parser.parse_args()
 
-    print(f"🔍 Validating database: {db_path}")
+    api = PaperDynastyAPI(environment=args.env)
+
+    print(f"🔍 Validating cardset {args.cardset_id} ({args.env})")
     print()
 
-    # Connect to database
-    conn = sqlite3.connect(db_path)
-    cursor = conn.cursor()
-
-    # Get card counts
-    counts = get_card_counts(cursor)
+    counts = get_card_counts(api, args.cardset_id)
     print(f"📊 Total Cards:")
     print(f"  - Batting: {counts['batting']}")
     print(f"  - Pitching: {counts['pitching']}")
     print()
 
-    # Run validations
     all_errors = []
-    all_errors.extend(validate_batting_cards(cursor))
-    all_errors.extend(validate_pitching_cards(cursor))
+    all_errors.extend(validate_card_counts(api, args.cardset_id))
+    all_errors.extend(validate_player_coverage(api, args.cardset_id))
 
-    # Report results
     if all_errors:
         print("❌ VALIDATION FAILED")
         print()
@@ -150,7 +149,7 @@ def main():
         sys.exit(1)
     else:
         print("✅ VALIDATION PASSED")
-        print("No errors found in database")
+        print("No errors found")
         sys.exit(0)
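The new player-coverage validation reduces to a set difference: collect the player IDs that appear on any batting or pitching card, then flag players whose ID is in neither set. A standalone sketch of that core logic, with simplified sample data standing in for the API responses:

```python
# Sample records shaped like the player and card payloads used above;
# the names and IDs here are illustrative, not real API data.
players = [
    {"player_id": 1, "p_name": "A. Slugger"},
    {"player_id": 2, "p_name": "B. Ace"},
    {"player_id": 3, "p_name": "C. Benchwarmer"},
]
batting_player_ids = {1}      # IDs found on batting cards
pitching_player_ids = {2}     # IDs found on pitching cards

# Union of both sets gives every player covered by at least one card.
all_card_player_ids = batting_player_ids | pitching_player_ids

uncovered = [p for p in players if p["player_id"] not in all_card_player_ids]
assert [p["p_name"] for p in uncovered] == ["C. Benchwarmer"]
```

Building the ID sets once and testing membership keeps the check linear in the number of players, instead of scanning the card lists per player.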