Compare commits

..

6 Commits

Author SHA1 Message Date
Cal Corum
dd7c68c13a docs: sync KB — discord-browser-testing-workflow.md 2026-04-06 02:00:38 -05:00
Cal Corum
acb8fef084 docs: sync KB — database-deployment-guide.md,refractor-in-app-test-plan.md 2026-04-06 00:00:03 -05:00
Cal Corum
cacf4a9043 feat: add weekly Gitea disk cleanup Ansible playbook
Gitea LXC 225 hit 100% disk from accumulated Docker buildx volumes,
repo-archive cache, and journal logs. Adds automated weekly cleanup
managed by systemd timer on the Ansible controller (Wed 04:00 UTC).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 19:24:59 -05:00
Cal Corum
29a20fbe06 feat: add monthly Proxmox maintenance reboot automation (#26)
Establishes a first-Sunday-of-the-month maintenance window orchestrated
by Ansible on LXC 304. Split into two playbooks to handle the self-reboot
paradox (the controller is a guest on the host being rebooted):

- monthly-reboot.yml: snapshots, tiered shutdown with per-guest polling,
  fire-and-forget host reboot
- post-reboot-startup.yml: controlled tiered startup with staggered delays,
  Pi-hole UDP DNS fix, validation, and snapshot cleanup

Also fixes onboot:1 on VM 109, LXC 221, LXC 223 and creates a recurring
Google Calendar event for the maintenance window.

Closes #26

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 23:33:59 -05:00
cal
fdc44acb28 Merge pull request 'chore: add --hosts test coverage and right-size VM 115 socket config' (#46) from chore/26-proxmox-monthly-maintenance-reboot into main 2026-04-04 00:35:31 +00:00
Cal Corum
48a804dda2 feat: right-size VM 115 config and add --hosts flag to audit script
Reduce VM 115 (docker-sba) from 16 vCPUs (2×8) to 8 vCPUs (1×8) to
match actual workload (0.06 load/core). Add --hosts flag to
homelab-audit.sh for targeted post-change audits.

Closes #18

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 17:33:01 -05:00
8 changed files with 391 additions and 60 deletions

View File

@ -1,55 +0,0 @@
---
# Monthly Docker Prune — Deploy Cleanup Cron to All Docker Hosts
#
# Deploys /etc/cron.monthly/docker-prune to each VM running Docker.
# The script prunes stopped containers, unused images, and orphaned volumes
# older than 30 days (720h). Volumes labeled `keep` are exempt.
#
# Resolves accumulated disk waste from stopped containers and stale images.
# The `--filter "until=720h"` age gate prevents removing recently-pulled
# images that haven't started yet. `docker image prune -a` only removes
# images not referenced by any container (running or stopped), so the
# age filter adds an extra safety margin.
#
# Hosts: VM 106 (docker-home), VM 110 (discord-bots), VM 112 (databases-bots),
# VM 115 (docker-sba), VM 116 (docker-home-servers), manticore
#
# Controller: LXC 304 (ansible-controller) at 10.10.0.232
#
# Usage:
# # Dry run (shows what would change, skips writes)
# ansible-playbook /opt/ansible/playbooks/docker-prune.yml --check
#
# # Single host
# ansible-playbook /opt/ansible/playbooks/docker-prune.yml --limit docker-sba
#
# # All Docker hosts
# ansible-playbook /opt/ansible/playbooks/docker-prune.yml
#
# To undo: rm /etc/cron.monthly/docker-prune on target hosts
- name: Deploy Docker monthly prune cron to all Docker hosts
  hosts: docker-home:discord-bots:databases-bots:docker-sba:docker-home-servers:manticore
  become: true
  tasks:
    - name: Deploy docker-prune cron script
      ansible.builtin.copy:
        dest: /etc/cron.monthly/docker-prune
        owner: root
        group: root
        mode: "0755"
        content: |
          #!/bin/bash
          # Monthly Docker cleanup — deployed by Ansible (issue #29)
          # Prunes stopped containers, unused images (>30 days), and orphaned volumes.
          # Volumes labeled `keep` are exempt from volume pruning.
          set -euo pipefail
          docker container prune -f --filter "until=720h"
          docker image prune -a -f --filter "until=720h"
          docker volume prune -f --filter "label!=keep"

    - name: Verify docker-prune script is executable
      ansible.builtin.command: test -x /etc/cron.monthly/docker-prune
      changed_when: false

View File

@ -0,0 +1,80 @@
---
# gitea-cleanup.yml — Weekly cleanup of Gitea server disk space
#
# Removes stale Docker buildx volumes, unused images, Gitea repo-archive
# cache, and vacuums journal logs to prevent disk exhaustion on LXC 225.
#
# Schedule: Weekly via systemd timer on LXC 304 (ansible-controller)
#
# Usage:
# ansible-playbook /opt/ansible/playbooks/gitea-cleanup.yml # full run
# ansible-playbook /opt/ansible/playbooks/gitea-cleanup.yml --check # dry run
- name: Gitea server disk cleanup
  hosts: gitea
  gather_facts: false
  tasks:
    - name: Check current disk usage
      ansible.builtin.shell: df --output=pcent / | tail -1
      register: disk_before
      changed_when: false

    - name: Display current disk usage
      ansible.builtin.debug:
        msg: "Disk usage before cleanup: {{ disk_before.stdout | trim }}"

    - name: Clear Gitea repo-archive cache
      ansible.builtin.find:
        paths: /var/lib/gitea/data/repo-archive
        file_type: any
      register: repo_archive_files

    - name: Remove repo-archive files
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ repo_archive_files.files }}"
      loop_control:
        label: "{{ item.path | basename }}"
      when: repo_archive_files.files | length > 0

    - name: Remove orphaned Docker buildx volumes
      ansible.builtin.shell: |
        volumes=$(docker volume ls -q --filter name=buildx_buildkit)
        if [ -n "$volumes" ]; then
          echo "$volumes" | xargs docker volume rm 2>&1
        else
          echo "No buildx volumes to remove"
        fi
      register: buildx_cleanup
      changed_when: "'No buildx volumes' not in buildx_cleanup.stdout"

    - name: Prune unused Docker images
      ansible.builtin.command: docker image prune -af
      register: image_prune
      changed_when: "'Total reclaimed space: 0B' not in image_prune.stdout"

    - name: Prune unused Docker volumes
      ansible.builtin.command: docker volume prune -f
      register: volume_prune
      changed_when: "'Total reclaimed space: 0B' not in volume_prune.stdout"

    - name: Vacuum journal logs to 500M
      ansible.builtin.command: journalctl --vacuum-size=500M
      register: journal_vacuum
      changed_when: "'freed 0B' not in journal_vacuum.stderr"

    - name: Check disk usage after cleanup
      ansible.builtin.shell: df --output=pcent / | tail -1
      register: disk_after
      changed_when: false

    - name: Display cleanup summary
      ansible.builtin.debug:
        msg: >-
          Cleanup complete.
          Disk: {{ disk_before.stdout | default('N/A') | trim }} → {{ disk_after.stdout | default('N/A') | trim }}.
          Buildx: {{ (buildx_cleanup.stdout_lines | default(['N/A'])) | last }}.
          Images: {{ (image_prune.stdout_lines | default(['N/A'])) | last }}.
          Journal: {{ (journal_vacuum.stderr_lines | default(['N/A'])) | last }}.
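The playbook header says the weekly schedule is driven by a systemd timer on LXC 304, and the commit message mentions Wed 04:00 UTC. A minimal sketch of what such units could look like (unit names, paths, and the `OnCalendar` value are assumptions, not the deployed files):

```ini
# /etc/systemd/system/gitea-cleanup.service (sketch)
[Unit]
Description=Weekly Gitea disk cleanup via Ansible

[Service]
Type=oneshot
ExecStart=/usr/bin/ansible-playbook /opt/ansible/playbooks/gitea-cleanup.yml

# /etc/systemd/system/gitea-cleanup.timer (sketch)
[Unit]
Description=Run Gitea disk cleanup weekly

[Timer]
OnCalendar=Wed 04:00 UTC
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now gitea-cleanup.timer`; `Persistent=true` runs a missed window at the next boot.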

View File

@ -5,7 +5,7 @@
# to collect system metrics, then generates a summary report.
#
# Usage:
# homelab-audit.sh [--output-dir DIR]
# homelab-audit.sh [--output-dir DIR] [--hosts label:ip,label:ip,...]
#
# Environment overrides:
# STUCK_PROC_CPU_WARN CPU% at which a D-state process is flagged (default: 10)
@ -29,7 +29,6 @@ LOAD_WARN=2.0
MEM_WARN=85
ZOMBIE_WARN=1
SWAP_WARN=512
HOSTS_FILTER="" # comma-separated host list from --hosts; empty = audit all
JSON_OUTPUT=0 # set to 1 by --json

View File

@ -93,6 +93,34 @@ else
fail "disk_usage" "expected 'N /path', got: '$result'"
fi
# --- --hosts flag parsing ---
echo ""
echo "=== --hosts argument parsing tests ==="
# Single host
input="vm-115:10.10.0.88"
IFS=',' read -ra entries <<<"$input"
label="${entries[0]%%:*}"
addr="${entries[0]#*:}"
if [[ "$label" == "vm-115" && "$addr" == "10.10.0.88" ]]; then
  pass "--hosts single entry parsed: $label $addr"
else
  fail "--hosts single" "expected 'vm-115 10.10.0.88', got: '$label $addr'"
fi

# Multiple hosts
input="vm-115:10.10.0.88,lxc-225:10.10.0.225"
IFS=',' read -ra entries <<<"$input"
label1="${entries[0]%%:*}"
addr1="${entries[0]#*:}"
label2="${entries[1]%%:*}"
addr2="${entries[1]#*:}"
if [[ "$label1" == "vm-115" && "$addr1" == "10.10.0.88" && "$label2" == "lxc-225" && "$addr2" == "10.10.0.225" ]]; then
  pass "--hosts multi entry parsed: $label1 $addr1, $label2 $addr2"
else
  fail "--hosts multi" "unexpected parse result"
fi
echo ""
echo "=== Results: $PASS passed, $FAIL failed ==="
((FAIL == 0))
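The split logic these tests exercise can be sketched end-to-end as a small loop (a sketch, not the actual homelab-audit.sh code; the label:ip values are the same ones used in the tests):

```shell
# Split a --hosts value on commas, then split each entry at the first colon.
HOSTS_FILTER="vm-115:10.10.0.88,lxc-225:10.10.0.225"
IFS=',' read -ra entries <<<"$HOSTS_FILTER"
for entry in "${entries[@]}"; do
  label="${entry%%:*}"   # text before the first colon
  addr="${entry#*:}"     # text after the first colon
  echo "auditing $label at $addr"
done
```

`%%:*` and `#*:` are plain parameter expansion, so labels may not contain colons, but IPv4 addresses are safe.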

View File

@ -178,7 +178,7 @@ When merging many PRs at once (e.g., batch pagination PRs), branch protection ru
| `LOG_LEVEL` | Logging verbosity (default: INFO) |
| `DATABASE_TYPE` | `postgresql` |
| `POSTGRES_HOST` | Container name of PostgreSQL |
| `POSTGRES_DB` | Database name (`pd_master`) |
| `POSTGRES_DB` | Database name `pd_master` (prod) / `paperdynasty_dev` (dev) |
| `POSTGRES_USER` | DB username |
| `POSTGRES_PASSWORD` | DB password |
@ -189,4 +189,6 @@ When merging many PRs at once (e.g., batch pagination PRs), branch protection ru
| Database API (prod) | `ssh akamai` | `pd_api` | 815 |
| Database API (dev) | `ssh pd-database` | `dev_pd_database` | 813 |
| PostgreSQL (prod) | `ssh akamai` | `pd_postgres` | 5432 |
| PostgreSQL (dev) | `ssh pd-database` | `pd_postgres` | 5432 |
| PostgreSQL (dev) | `ssh pd-database` | `sba_postgres` | 5432 |
**Dev database credentials:** container `sba_postgres`, database `paperdynasty_dev`, user `sba_admin`. Prod uses `pd_postgres`, database `pd_master`.

View File

@ -0,0 +1,170 @@
---
title: "Discord Bot Browser Testing via Playwright + CDP"
description: "Step-by-step workflow for automated Discord bot testing using Playwright connected to Brave browser via Chrome DevTools Protocol. Covers setup, slash command execution, and screenshot capture."
type: runbook
domain: paper-dynasty
tags: [paper-dynasty, discord, testing, playwright, automation]
---
# Discord Bot Browser Testing via Playwright + CDP
Automated testing of Paper Dynasty Discord bot commands by connecting Playwright to a running Brave browser instance with Discord open.
## Prerequisites
- Brave browser installed (`brave-browser-stable`)
- Playwright installed (`pip install playwright && playwright install chromium`)
- Discord logged in via browser (not desktop app)
- Discord bot running (locally via docker-compose or on remote host)
- Bot's `API_TOKEN` must match the target API environment
## Setup
### 1. Launch Brave with CDP enabled
Brave must be started with `--remote-debugging-port`. If Brave is already running, **kill it first** — otherwise the flag is ignored and the new process merges into the existing one.
```bash
killall brave 2>/dev/null; sleep 2; brave-browser-stable --remote-debugging-port=9222 &
```
### 2. Verify CDP is responding
```bash
curl -s http://localhost:9222/json/version | python3 -m json.tool
```
Should return JSON with `Browser`, `webSocketDebuggerUrl`, etc.
### 3. Open Discord in browser
Navigate to `https://discord.com/channels/<server_id>/<channel_id>` in Brave.
**Paper Dynasty test server:**
- Server: Cals Test Server (`669356687294988350`)
- Channel: #pd-game-test (`982850262903451658`)
- URL: `https://discord.com/channels/669356687294988350/982850262903451658`
### 4. Verify bot is running with correct API token
```bash
# Check docker-compose.yml has the right API_TOKEN for the target environment
grep API_TOKEN /mnt/NV2/Development/paper-dynasty/discord-app/docker-compose.yml

# The dev API token lives on the dev host; sanity-check that its database is reachable:
ssh pd-database "docker exec sba_postgres psql -U sba_admin -d paperdynasty_dev -c \"SELECT 1;\""

# Restart the bot if the token was changed:
cd /mnt/NV2/Development/paper-dynasty/discord-app && docker compose up -d
```
## Running Commands
### Find the Discord tab
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp('http://localhost:9222')
    for ctx in browser.contexts:
        for page in ctx.pages:
            if 'discord' in page.url.lower():
                print(f'Found: {page.url}')
                break
    browser.close()
```
### Execute a slash command and capture result
```python
from playwright.sync_api import sync_playwright
import time

def run_slash_command(command: str, wait_seconds: int = 5, screenshot_path: str = '/tmp/discord_result.png'):
    """
    Type a slash command in Discord, select the top autocomplete option,
    submit it, wait for the bot response, and take a screenshot.
    """
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp('http://localhost:9222')
        for ctx in browser.contexts:
            for page in ctx.pages:
                if 'discord' in page.url.lower():
                    msg_box = page.locator('[role="textbox"][data-slate-editor="true"]')
                    msg_box.click()
                    time.sleep(0.3)
                    # Type the command (delay simulates human typing for autocomplete)
                    msg_box.type(command, delay=80)
                    time.sleep(2)
                    # Tab selects the top autocomplete option
                    page.keyboard.press('Tab')
                    time.sleep(1)
                    # Enter submits the command
                    page.keyboard.press('Enter')
                    time.sleep(wait_seconds)
                    page.screenshot(path=screenshot_path)
                    print(f'Screenshot saved to {screenshot_path}')
                    break
        browser.close()

# Example usage:
run_slash_command('/refractor status')
```
### Commands with parameters
After pressing Tab to select the command, Discord shows an options panel. To fill parameters:
1. The first parameter input is auto-focused after Tab
2. Type the value, then Tab to move to the next parameter
3. Press Enter when ready to submit
```python
# Example: /refractor status with tier filter
msg_box.type('/refractor status', delay=80)
time.sleep(2)
page.keyboard.press('Tab') # Select command from autocomplete
time.sleep(1)
# Now fill parameters if needed, or just submit
page.keyboard.press('Enter')
```
## Key Selectors
| Element | Selector |
|---------|----------|
| Message input box | `[role="textbox"][data-slate-editor="true"]` |
| Autocomplete popup | `[class*="autocomplete"]` |
## Gotchas
- **Brave must be killed before relaunch** — if an instance is already running, `--remote-debugging-port` is silently ignored
- **Bot token mismatch** — the bot's `API_TOKEN` in `docker-compose.yml` must match the target API (dev or prod). Symptoms: `{"detail":"Unauthorized"}` in bot logs
- **Viewport is None** — when connecting via CDP, `page.viewport_size` returns None. Use `page.evaluate('() => ({w: window.innerWidth, h: window.innerHeight})')` instead
- **Autocomplete timing** — typing too fast may not trigger Discord's autocomplete. The `delay=80` on `msg_box.type()` simulates human speed
- **Multiple bots** — if multiple bots register the same slash command (e.g. MantiTestBot and PucklTestBot), Tab selects the top option. Verify the correct bot name in the autocomplete popup before proceeding
## Test Plan Reference
The Refractor integration test plan is at:
`discord-app/tests/refractor-integration-test-plan.md`
Key test case groups:
- REF-01 to REF-06: Tier badges and display
- REF-10 to REF-15: Progress bars and filtering
- REF-40 to REF-42: Cross-command badges (card, roster)
- REF-70 to REF-72: Cross-command badge propagation (the current priority)
## Verified On
- **Date:** 2026-04-06
- **Browser:** Brave 146.0.7680.178 (Chromium-based)
- **Playwright:** Node.js driver via Python sync API
- **Bot:** MantiTestBot on Cals Test Server, #pd-game-test channel
- **API:** pddev.manticorum.com (dev environment)

View File

@ -0,0 +1,107 @@
---
title: "Refractor In-App Test Plan"
description: "Comprehensive manual test plan for the Refractor card evolution system — covers /refractor status, tier badges, post-game hooks, tier-up notifications, card art tiers, and known issues."
type: guide
domain: paper-dynasty
tags: [paper-dynasty, testing, refractor, discord, database]
---
# Refractor In-App Test Plan
Manual test plan for the Refractor (card evolution) system. All testing targets **dev** environment (`pddev.manticorum.com` / dev Discord bot).
## Prerequisites
- Dev bot running on `sba-bots`
- Dev API at `pddev.manticorum.com` (port 813)
- Team with seeded refractor data (team 31 from prior session)
- At least one game playable to trigger post-game hooks
---
## REF-10: `/refractor status` — Basic Display
| # | Test | Steps | Expected |
|---|---|---|---|
| 10 | No filters | `/refractor status` | Ephemeral embed with team branding, tier summary line, 10 cards sorted by tier DESC, pagination buttons if >10 cards |
| 11 | Card type filter | `/refractor status card_type:Batter` | Only batter cards shown, count matches |
| 12 | Tier filter | `/refractor status tier:T2—Refractor` | Only T2 cards, embed color changes to tier color |
| 13 | Progress filter | `/refractor status progress:Close to next tier` | Only cards >=80% to next threshold, fully evolved excluded |
| 14 | Combined filters | `/refractor status card_type:Batter tier:T1—Base Chrome` | Intersection of both filters |
| 15 | Empty result | `/refractor status tier:T4—Superfractor` (if none exist) | "No cards match your filters..." message with filter details |
## REF-20: `/refractor status` — Pagination
| # | Test | Steps | Expected |
|---|---|---|---|
| 20 | Page buttons appear | `/refractor status` with >10 cards | Prev/Next buttons visible |
| 21 | Next page | Click `Next >` | Page 2 shown, footer updates to "Page 2/N" |
| 22 | Prev page | From page 2, click `< Prev` | Back to page 1 |
| 23 | First page prev | On page 1, click `< Prev` | Nothing happens / stays on page 1 |
| 24 | Last page next | On last page, click `Next >` | Nothing happens / stays on last page |
| 25 | Button timeout | Wait 120s after command | Buttons become unresponsive |
| 26 | Wrong user clicks | Another user clicks buttons | Silently ignored |
## REF-30: Tier Badges in Card Embeds
| # | Test | Steps | Expected |
|---|---|---|---|
| 30 | T0 card display | View a T0 card via `/myteam` or `/roster` | No badge prefix, just player name |
| 31 | T1 badge | View a T1 card | Title shows `[BC] Player Name` |
| 32 | T2 badge | View a T2 card | Title shows `[R] Player Name` |
| 33 | T3 badge | View a T3 card | Title shows `[GR] Player Name` |
| 34 | T4 badge | View a T4 card (if exists) | Title shows `[SF] Player Name` |
| 35 | Badge in pack open | Open a pack with an evolved card | Badge appears in pack embed |
| 36 | API down gracefully | (hard to test) | Card displays normally with no badge, no error |
## REF-50: Post-Game Hook & Tier-Up Notifications
| # | Test | Steps | Expected |
|---|---|---|---|
| 50 | Game completes normally | Play a full game | No errors in bot logs; refractor evaluate-game fires after season-stats update |
| 51 | Tier-up notification | Play game where a card crosses a threshold | Embed in game channel: "Refractor Tier Up!", player name, tier name, correct color |
| 52 | No tier-up | Play game where no thresholds crossed | No refractor embed posted, game completes normally |
| 53 | Multiple tier-ups | Game where 2+ players tier up | One embed per tier-up, all posted |
| 54 | Auto-init new card | Play game with a card that has no RefractorCardState | State created automatically, player evaluated, no error |
| 55 | Superfractor notification | (may need forced data) | "SUPERFRACTOR!" title, teal color |
## REF-60: Card Art with Tiers (API-level)
| # | Test | Steps | Expected |
|---|---|---|---|
| 60 | T0 card image | `GET /api/v2/players/{id}/card-image?card_type=batting` | Base card, no tier styling |
| 61 | Tier override | `GET ...?card_type=batting&tier=2` | Refractor styling visible (border, diamond indicator) |
| 62 | Each tier visual | `?tier=1` through `?tier=4` | Correct border colors, diamond fill, header gradients per tier |
| 63 | Pitcher card | `?card_type=pitching&tier=2` | Tier styling applies correctly to pitcher layout |
## REF-70: Known Issues to Verify
| # | Issue | Check | Status |
|---|---|---|---|
| 70 | Superfractor embed says "Rating boosts coming in a future update!" | Verify — boosts ARE implemented now, text is stale | **Fix needed** |
| 71 | `on_timeout` doesn't edit message | Buttons stay visually active after 120s | **Known, low priority** |
| 72 | Card embed perf (1 API call per card) | Note latency on roster views with 10+ cards | **Monitor** |
| 73 | Season-stats failure kills refractor eval | Both in same try/except | **Known risk, verify logging** |
---
## API Endpoints Under Test
| Method | Endpoint | Used By |
|---|---|---|
| GET | `/api/v2/refractor/tracks` | Track listing |
| GET | `/api/v2/refractor/cards?team_id=X` | `/refractor status` command |
| GET | `/api/v2/refractor/cards/{card_id}` | Tier badge in card embeds |
| POST | `/api/v2/refractor/cards/{card_id}/evaluate` | Force re-evaluation |
| POST | `/api/v2/refractor/evaluate-game/{game_id}` | Post-game hook |
| GET | `/api/v2/teams/{team_id}/refractors` | Teams alias endpoint |
| GET | `/api/v2/players/{id}/card-image?tier=N` | Card art tier preview |
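For reference, the dev-environment request URLs can be assembled from the table like this (illustration only: `CARD_ID` and `PLAYER_ID` are hypothetical placeholders; team 31 is the seeded test team from the prerequisites):

```shell
BASE="https://pddev.manticorum.com"   # dev API base used throughout this plan
TEAM_ID=31                            # seeded test team (see Prerequisites)
CARD_ID=101                           # hypothetical card id
PLAYER_ID=202                         # hypothetical player id
echo "GET  $BASE/api/v2/refractor/cards?team_id=$TEAM_ID"
echo "POST $BASE/api/v2/refractor/cards/$CARD_ID/evaluate"
echo "GET  $BASE/api/v2/players/$PLAYER_ID/card-image?tier=2"
```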
## Notification Embed Colors
| Tier | Name | Color |
|---|---|---|
| T1 | Base Chrome | Green (0x2ECC71) |
| T2 | Refractor | Gold (0xF1C40F) |
| T3 | Gold Refractor | Purple (0x9B59B6) |
| T4 | Superfractor | Teal (0x1ABC9C) |
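Put together with the badge prefixes from REF-30, the tier mapping fits in one small lookup (a hypothetical helper for test tooling, not code from the bot):

```shell
# Hypothetical helper: tier number -> "BADGE COLORHEX", per the REF-30 badge
# table and the embed-color table above. T0/unknown returns empty (no badge).
tier_info() {
  case "$1" in
    1) echo "BC 2ECC71" ;;  # Base Chrome (green)
    2) echo "R F1C40F" ;;   # Refractor (gold)
    3) echo "GR 9B59B6" ;;  # Gold Refractor (purple)
    4) echo "SF 1ABC9C" ;;  # Superfractor (teal)
    *) echo "" ;;
  esac
}
tier_info 2
```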

View File

@ -12,5 +12,5 @@ ostype: l26
scsi0: local-lvm:vm-115-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=19be98ee-f60d-473d-acd2-9164717fcd11
sockets: 2
sockets: 1
vmgenid: 682dfeab-8c63-4f0b-8ed2-8828c2f808ef