CLAUDE: Add server-configs version control system

Introduces centralized configuration management for home lab:
- sync-configs.sh script for pull/push/diff/deploy operations
- hosts.yml inventory tracking 9 hosts (Proxmox, VMs, LXCs, cloud)
- Docker Compose files from all active hosts (sanitized)
- Proxmox VM and LXC configurations for backup reference
- .env.example files for services requiring secrets

All hardcoded secrets replaced with ${VAR} references.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Cal Corum 2025-12-11 16:13:28 -06:00
parent b8b4b13130
commit cd614e753a
54 changed files with 2209 additions and 0 deletions

server-configs/.gitignore

@@ -0,0 +1,27 @@
# Secrets - NEVER commit these
.env
*.env
!.env.example
# Application data that may accidentally get pulled
data/
logs/
*.log
*.db
*.sqlite
*.sqlite3
cache/
config/metadata/
config/data/
__pycache__/
node_modules/
*.pyc
# Backup files
*.bak
*.backup
*.zip
# OS files
.DS_Store
Thumbs.db

server-configs/README.md

@@ -0,0 +1,230 @@
# Home Lab Server Configurations
Version-controlled configuration files for the home lab infrastructure. This system provides a centralized way to track, sync, and deploy Docker Compose files and VM/LXC configurations across multiple hosts.
## Quick Start
```bash
# Check status of all hosts
./sync-configs.sh status
# Pull latest configs from all hosts
./sync-configs.sh pull
# Pull from a specific host
./sync-configs.sh pull ubuntu-manticore
# Show differences between local and remote
./sync-configs.sh diff
# Push configs (no restart)
./sync-configs.sh push
# Deploy and restart a specific service
./sync-configs.sh deploy sba-bots paper-dynasty
```
## Directory Structure
```
server-configs/
├── hosts.yml # Host inventory with connection details
├── sync-configs.sh # Main sync script
├── .gitignore # Prevents secrets from being committed
├── README.md # This file
├── proxmox/ # Proxmox hypervisor configs
│ ├── lxc/ # LXC container configurations
│ └── qemu/ # VM configurations
├── ubuntu-manticore/ # Physical Ubuntu server
│ └── docker-compose/
│ ├── jellyfin/
│ ├── tdarr/
│ └── watchstate/
├── sba-bots/ # SBA Discord bots VM
│ └── docker-compose/
│ ├── paper-dynasty/
│ ├── major-domo/
│ └── ...
├── strat-database/ # Database services VM
│ └── docker-compose/
│ ├── pd-database/
│ ├── sba-database/
│ └── ...
├── arr-stack/ # Media automation LXC
│ └── docker-compose/
│ └── arr-stack/
├── n8n/ # Workflow automation LXC
│ └── docker-compose/
│ └── n8n/
├── akamai/ # Cloud server (Linode)
│ └── docker-compose/
│ ├── nginx-proxy-manager/
│ ├── major-domo/
│ └── ...
└── nobara-desktop/ # Local dev machine
└── docker-compose/
```
## Host Inventory
| Host | Type | IP | Description |
|------|------|-----|-------------|
| proxmox | Proxmox VE | 10.10.0.11 | Main hypervisor (VMs/LXCs) |
| ubuntu-manticore | Docker | 10.10.0.226 | Physical server - media services |
| discord-bots | Docker | 10.10.0.33 | Discord bots and game services |
| sba-bots | Docker | 10.10.0.88 | SBA/Paper Dynasty production |
| strat-database | Docker | 10.10.0.42 | Database services |
| arr-stack | Docker | 10.10.0.221 | Sonarr/Radarr/etc. |
| n8n | Docker | 10.10.0.210 | Workflow automation |
| akamai | Docker | 172.237.147.99 | Public-facing services |
| nobara-desktop | Local | - | Development workstation |
## Commands Reference
### `./sync-configs.sh pull [host]`
Pulls Docker Compose files from remote hosts into the local repository. Only syncs `docker-compose*.yml`, `compose.yml`, and `.env.example` files, not application data.
### `./sync-configs.sh push [host]`
Pushes local configs to remote hosts. Does NOT restart services. Use this to stage changes before deployment.
### `./sync-configs.sh diff [host]`
Shows differences between local repository and remote host configurations.
### `./sync-configs.sh deploy <host> <service>`
Pushes config for a specific service and restarts it. Example:
```bash
./sync-configs.sh deploy sba-bots paper-dynasty
```
### `./sync-configs.sh status`
Shows connectivity status and config count for all hosts.
### `./sync-configs.sh list`
Lists all configured hosts and their services.
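A status check like this typically amounts to a batch-mode SSH probe per host. This is a hedged sketch, not the script's actual logic; the aliases are assumed to exist in `~/.ssh/config`:

```shell
# Probe each host alias non-interactively. BatchMode avoids password
# prompts; ConnectTimeout bounds the wait when a host is unreachable.
status_report=""
for host in ubuntu-manticore sba-bots strat-database; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
    state=online
  else
    state=offline
  fi
  status_report="${status_report}${host}: ${state}
"
  echo "$host: $state"
done
```

Hosts without a resolvable alias or loaded key simply report `offline`, which matches the troubleshooting advice below: verify the alias with a plain `ssh <host-alias>` first.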
## Secrets Management
**Secrets are NOT stored in this repository.**
All sensitive values (tokens, passwords, API keys) are referenced via environment variables (`${VAR_NAME}`) and stored in `.env` files on each host. The `.gitignore` prevents `.env` files from being committed.
For each service that requires secrets, a `.env.example` file is provided showing the required variables:
```bash
# Example .env.example
BOT_TOKEN=your_discord_bot_token_here
API_TOKEN=your_api_token_here
DB_PASSWORD=your_database_password_here
```
When deploying a new service:
1. Copy `.env.example` to `.env` on the target host
2. Fill in the actual secret values
3. Run `docker compose up -d`
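The first two steps above can be sketched end to end. Everything here runs against a throwaway directory, and the token value is a placeholder, not a real secret:

```shell
set -eu
dir=$(mktemp -d)
# Stand-in for the committed .env.example
printf 'BOT_TOKEN=your_discord_bot_token_here\n' > "$dir/.env.example"
# Step 1: copy the template into place
cp "$dir/.env.example" "$dir/.env"
# Step 2: normally you'd open .env in an editor; sed fills it here
sed -i 's/^BOT_TOKEN=.*/BOT_TOKEN=example-real-token/' "$dir/.env"
grep '^BOT_TOKEN=' "$dir/.env"
```

The `.env.example` stays committed with placeholder values; only the filled-in `.env` on the host carries real secrets, and `.gitignore` keeps it out of the repository.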
## Workflow
### Making Configuration Changes
1. **Pull latest** to ensure you have current configs:
```bash
./sync-configs.sh pull
```
2. **Edit locally** in your preferred editor
3. **Review changes** before pushing:
```bash
./sync-configs.sh diff <host>
```
4. **Push changes** to the host:
```bash
./sync-configs.sh push <host>
```
5. **Deploy** when ready to restart:
```bash
./sync-configs.sh deploy <host> <service>
```
6. **Commit** to git:
```bash
git add -A && git commit -m "Update <service> config"
```
### Adding a New Host
1. Add host entry to `hosts.yml`:
```yaml
new-host:
type: docker
ssh_alias: new-host
ip: 10.10.0.XXX
user: cal
description: "Description here"
config_paths:
docker-compose: /path/to/compose/files
services:
- service1
- service2
```
2. Add SSH alias to `~/.ssh/config` if needed
3. Create directory and pull:
```bash
mkdir -p server-configs/new-host/docker-compose
./sync-configs.sh pull new-host
```
### Adding a New Service
1. Create the service on the host with `docker-compose.yml`
2. Pull the config:
```bash
./sync-configs.sh pull <host>
```
3. Sanitize any hardcoded secrets (replace with `${VAR}` references)
4. Create `.env.example` if secrets are required
5. Commit to git
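The sanitization in step 3 is mechanical: replace the literal value with a `${VAR}` reference and record the variable name in `.env.example`. A sketch against a throwaway file (the token shown is fake):

```shell
set -eu
f=$(mktemp)
printf -- '- BOT_TOKEN=abc123-hardcoded\n' > "$f"
# Swap the literal secret for an environment-variable reference;
# single quotes keep ${BOT_TOKEN} literal in the replacement text
sed -i 's/BOT_TOKEN=.*/BOT_TOKEN=${BOT_TOKEN}/' "$f"
cat "$f"   # the line now reads BOT_TOKEN=${BOT_TOKEN}
```

After the swap, Docker Compose resolves `${BOT_TOKEN}` from the host's `.env` file at `up` time, so the committed compose file never contains the secret itself.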
## Proxmox Configs
Proxmox VM and LXC configurations are pulled for reference and backup purposes. The sync script does NOT push Proxmox configs back automatically due to the sensitive nature of hypervisor configuration.
To restore a VM/LXC config:
1. Review the config file in `proxmox/qemu/` or `proxmox/lxc/`
2. Manually copy to Proxmox if needed
3. Use Proxmox web UI or CLI for actual restoration
## Troubleshooting
### Host shows as offline
- Check if the VM/LXC is running on Proxmox
- Verify SSH connectivity: `ssh <host-alias>`
- Check `hosts.yml` for correct IP/alias
### Changes not taking effect
- Ensure you ran `push` after editing
- Check if service needs restart: `deploy` command
- Verify the remote file was updated: `diff` command
### Permission denied on push
- Check SSH key is loaded: `ssh-add -l`
- Verify user has write access to config directory
- For root-owned paths (arr-stack, n8n), ensure SSH alias uses correct user
## Related Documentation
- [Tdarr Setup](../tdarr/CONTEXT.md) - Media transcoding configuration
- [Networking](../networking/CONTEXT.md) - NPM and network config
- [Monitoring](../monitoring/CONTEXT.md) - System monitoring setup

@@ -0,0 +1,4 @@
# Dev Paper Dynasty - Environment Variables
BOT_TOKEN=your_discord_bot_token_here
API_TOKEN=your_api_token_here
DB_PASSWORD=your_database_password_here

@@ -0,0 +1,52 @@
services:
discord-app:
image: manticorum67/paper-dynasty-discordapp:dev
restart: unless-stopped
volumes:
- ./dev-storage:/usr/src/app/storage
- ./dev-logs:/usr/src/app/logs
environment:
- PYTHONUNBUFFERED=1
- GUILD_ID=669356687294988350
- BOT_TOKEN=${BOT_TOKEN}
- LOG_LEVEL=INFO
- API_TOKEN=${API_TOKEN}
- SCOREBOARD_CHANNEL=1000521215703789609
- TZ=America/Chicago
- PYTHONHASHSEED=1749583062
- DATABASE=Dev
- DB_USERNAME=postgres
- DB_PASSWORD=${DB_PASSWORD}
- DB_URL=db
- DB_NAME=postgres
depends_on:
db:
condition: service_healthy
db:
image: postgres
restart: unless-stopped
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- pd_postgres:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 1s
timeout: 5s
retries: 5
# start_period: 30s
adminer:
image: adminer
restart: always
ports:
- 8008:8080
networks:
backend:
driver: bridge
volumes:
pd_postgres:

@@ -0,0 +1,29 @@
services:
discord-app:
# image: manticorum67/major-domo-discordapp:1.5
image: manticorum67/major-domo-discordapp:latest
restart: unless-stopped
volumes:
# - ./storage:/usr/src/app/storage
# - ./logs:/usr/src/app/logs
- ./logs:/app/logs
- ./storage:/app/data
environment:
- PYTHONUNBUFFERED=1
- BOT_TOKEN=${BOT_TOKEN}
- GUILD_ID=${GUILD_ID}
- API_TOKEN=${API_TOKEN}
- TESTING=${TESTING}
- LOG_LEVEL=${LOG_LEVEL}
- TZ=${TZ}
- DB_URL=${DB_URL}
- HELP_EDITOR_ROLE_NAME=${HELP_EDITOR_ROLE_NAME}
- ENVIRONMENT=${ENVIRONMENT}
- OFFSEASON_FLAG=${OFFSEASON_FLAG}
networks:
- backend
networks:
backend:
driver: bridge

@@ -0,0 +1,30 @@
networks:
npm_network:
driver: bridge
services:
app:
image: 'jc21/nginx-proxy-manager:latest'
restart: unless-stopped
networks:
- npm_network
ports:
# These ports are in format <host-port>:<container-port>
- '80:80' # Public HTTP Port
- '443:443' # Public HTTPS Port
- '81:81' # Admin Web Port
# Add any other Stream port you want to expose
# - '21:21' # FTP
environment:
TZ: America/Chicago
# Uncomment this if you want to change the location of
# the SQLite DB file within the container
# DB_SQLITE_FILE: "/data/database.sqlite"
# Uncomment this if IPv6 is not enabled on your host
# DISABLE_IPV6: 'true'
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt

@@ -0,0 +1,80 @@
services:
db:
image: postgres:17
restart: unless-stopped
environment:
TZ: ${TZ}
POSTGRES_USER: ${PD_DB_USER}
POSTGRES_PASSWORD: ${PD_DB_PASSWORD}
POSTGRES_DB: ${PD_DB_DATABASE}
ports:
- 5432:5432
volumes:
- db_data:/var/lib/postgresql/data
apiproxy:
image: manticorum67/paper-dynasty-apiproxy:dev
restart: unless-stopped
environment:
TZ: ${TZ}
PRODUCTION: "False"
JWT_SECRET: ${PD_JWT_SECRET}
ports:
- 8000:8000
volumes:
- ./logs:/app/logs
adminer:
image: adminer
restart: always
environment:
TZ: ${TZ}
ports:
- 8088:8080
postgrest:
image: postgrest/postgrest
ports:
- "3000:3000"
environment:
TZ: ${TZ}
PGRST_DB_URI: "postgres://${PD_DB_USER}:${PD_DB_PASSWORD}@db:5432/${PD_DB_DATABASE}"
PGRST_OPENAPI_SERVER_PROXY_URI: ${PD_API_URL}
PGRST_DB_ANON_ROLE: web_anon
PGRST_JWT_SECRET: ${PD_JWT_SECRET}
depends_on:
- db
swagger:
image: swaggerapi/swagger-ui
ports:
- "8080:8080"
expose:
- "8080"
environment:
TZ: ${TZ}
PD_API_URL: ${PD_API_URL}
depends_on:
- db
# pgadmin:
# image: dpage/pgadmin4
# ports:
# - "8082:80"
# environment:
# TZ: ${TZ}
# PGADMIN_DEFAULT_EMAIL: cal.corum@gmail.com
# PGADMIN_DEFAULT_PASSWORD: ${PD_DB_PASSWORD}
# POSTGRES_HOST: postgreshost
# POSTGRES_USER: ${PD_DB_USER}
# POSTGRES_PASSWORD: ${PD_DB_PASSWORD}
# POSTGRES_DB: db
# PGADMIN_CONFIG_SERVER_MODE: False
# PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: False
# volumes:
# - pgadmin-data:/var/lib/pgadmin
volumes:
db_data:
pgadmin-data:

@@ -0,0 +1,120 @@
version: '3'
networks:
nginx-proxy-manager_npm_network:
external: true
services:
api:
# build: .
image: manticorum67/major-domo-database:latest
restart: unless-stopped
container_name: sba_db_api
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
ports:
- 801:80
networks:
- default
- nginx-proxy-manager_npm_network
environment:
- TESTING=False
- LOG_LEVEL=${LOG_LEVEL}
- API_TOKEN=${API_TOKEN}
- TZ=${TZ}
- WORKERS_PER_CORE=1.5
- TIMEOUT=120
- GRACEFUL_TIMEOUT=120
- DATABASE_TYPE=postgresql
- POSTGRES_HOST=sba_postgres
- POSTGRES_DB=${SBA_DATABASE}
- POSTGRES_USER=${SBA_DB_USER}
- POSTGRES_PASSWORD=${SBA_DB_USER_PASSWORD}
- REDIS_HOST=sba_redis
- REDIS_PORT=6379
- REDIS_DB=0
- CACHE_ENABLED=False
depends_on:
- postgres
- redis
postgres:
image: postgres:17-alpine
restart: unless-stopped
container_name: sba_postgres
environment:
- POSTGRES_DB=${SBA_DATABASE}
- POSTGRES_USER=${SBA_DB_USER}
- POSTGRES_PASSWORD=${SBA_DB_USER_PASSWORD}
- TZ=${TZ}
volumes:
- postgres_data:/var/lib/postgresql/data
- ./logs:/var/log/postgresql
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${SBA_DB_USER} -d ${SBA_DATABASE}"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
redis:
image: redis:7-alpine
restart: unless-stopped
container_name: sba_redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
environment:
- TZ=${TZ}
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
command: redis-server --appendonly yes
adminer:
image: adminer:latest
restart: unless-stopped
container_name: sba_adminer
ports:
- "8080:8080"
environment:
- ADMINER_DEFAULT_SERVER=sba_postgres
- TZ=${TZ}
# - ADMINER_DESIGN=pepa-linha-dark
depends_on:
- postgres
sync-prod:
image: alpine:latest
container_name: sba_sync_prod
volumes:
- ./scripts:/scripts
- /home/cal/.ssh:/tmp/ssh:ro
environment:
- SBA_DB_USER=${SBA_DB_USER}
- SBA_DATABASE=${SBA_DATABASE}
- SBA_DB_USER_PASSWORD=${SBA_DB_USER_PASSWORD}
command: >
sh -c "
cp -r /tmp/ssh /root/.ssh &&
chmod 700 /root/.ssh &&
chmod 600 /root/.ssh/* &&
chown -R root:root /root/.ssh &&
/scripts/sync_from_prod.sh
"
profiles: ["sync"]
depends_on:
- postgres
networks:
- default
volumes:
postgres_data:
redis_data:

@@ -0,0 +1,22 @@
services:
sba-web:
image: manticorum67/sba-website:${VERSION:-latest}
ports:
- "803:80" # Use internal port since nginx proxy manager handles external routing
restart: unless-stopped
volumes:
- ./public:/usr/share/nginx/html/public
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
environment:
- NODE_ENV=production
networks:
- default
networks:
default:
driver: bridge

@@ -0,0 +1,18 @@
services:
database:
image: manticorum67/major-domo-database:latest
restart: unless-stopped
container_name: sba_database
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
ports:
- 801:80
environment:
- TESTING=False
- LOG_LEVEL=INFO
- API_TOKEN=${API_TOKEN}
- TZ=America/Chicago
- WORKERS_PER_CORE=1.5
- TIMEOUT=120
- GRACEFUL_TIMEOUT=120

@@ -0,0 +1,19 @@
services:
db:
image: postgres:17
restart: unless-stopped
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: paperdynasty
volumes:
- db_data:/var/lib/postgresql/data
adminer:
image: adminer
restart: always
ports:
- 8080:8080
volumes:
db_data:

@@ -0,0 +1,99 @@
# /opt/arr-stack/docker-compose.yml
# Simplified *arr stack - Usenet only (no VPN needed)
# Deployed: 2025-12-05
services:
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID=0
- PGID=0
- TZ=America/Chicago
volumes:
- ./config/sonarr:/config
- /mnt/media:/media
ports:
- 8989:8989
security_opt:
- apparmor=unconfined
restart: unless-stopped
radarr:
image: linuxserver/radarr:latest
container_name: radarr
environment:
- PUID=0
- PGID=0
- TZ=America/Chicago
volumes:
- ./config/radarr:/config
- /mnt/media:/media
ports:
- 7878:7878
security_opt:
- apparmor=unconfined
restart: unless-stopped
readarr:
image: ghcr.io/hotio/readarr:latest
container_name: readarr
environment:
- PUID=0
- PGID=0
- TZ=America/Chicago
volumes:
- ./config/readarr:/config
- /mnt/media:/media
ports:
- 8787:8787
security_opt:
- apparmor=unconfined
restart: unless-stopped
lidarr:
image: linuxserver/lidarr:latest
container_name: lidarr
environment:
- PUID=0
- PGID=0
- TZ=America/Chicago
volumes:
- ./config/lidarr:/config
- /mnt/media:/media
ports:
- 8686:8686
security_opt:
- apparmor=unconfined
restart: unless-stopped
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
environment:
- TZ=America/Chicago
- LOG_LEVEL=debug
volumes:
- ./config/jellyseerr:/app/config
ports:
- 5055:5055
security_opt:
- apparmor=unconfined
restart: unless-stopped
sabnzbd:
image: linuxserver/sabnzbd:latest
container_name: sabnzbd
environment:
- PUID=0
- PGID=0
- TZ=America/Chicago
volumes:
- ./config/sabnzbd:/config
- /mnt/media/downloads:/downloads
- /mnt/media:/media
ports:
- 8080:8080
security_opt:
- apparmor=unconfined
restart: unless-stopped

server-configs/hosts.yml

@@ -0,0 +1,153 @@
# Home Lab Host Inventory
# Used by sync-configs.sh for configuration management
# Host Types:
# - proxmox: Proxmox VE hypervisor (LXC/VM configs)
# - docker: Hosts running Docker containers
# - local: Local machine (no SSH needed)
hosts:
# Proxmox Hypervisor
proxmox:
type: proxmox
ssh_alias: proxmox
ip: 10.10.0.11
user: root
description: "Main Proxmox VE hypervisor"
config_paths:
lxc: /etc/pve/nodes/proxmox/lxc
qemu: /etc/pve/nodes/proxmox/qemu-server
# Ubuntu Server (Physical)
ubuntu-manticore:
type: docker
ssh_alias: ubuntu-manticore
ip: 10.10.0.226
user: cal
description: "Physical Ubuntu server - media services"
config_paths:
docker-compose: /home/cal/docker
services:
- jellyfin
- tdarr
- watchstate
# Discord Bots VM (Proxmox)
discord-bots:
type: docker
ssh_alias: discord-bots
ip: 10.10.0.33
user: cal
description: "Discord bots and game services"
config_paths:
docker-compose: /home/cal/container-data
services:
- mln-ghost-ball
- postgres
- home-run-derby
- grizzlies-pa
- foundry
- major-domo
- fallout-dice
- forever-werewolf
- sbadev-database
- mln-central
- sand-trap-scramble
# SBA Bots VM (Proxmox)
sba-bots:
type: docker
ssh_alias: sba-bots
ip: 10.10.0.88
user: cal
description: "SBA/Paper Dynasty production bots"
config_paths:
docker-compose: /home/cal/container-data
services:
- paper-dynasty
- major-domo
- sba-website
- sba-ghost
# Database VM (Proxmox)
strat-database:
type: docker
ssh_alias: strat-database
ip: 10.10.0.42
user: cal
description: "Database services"
config_paths:
docker-compose: /home/cal/container-data
services:
- sba-cards
- pd-database
- postgres-database
- sba-database
- dev-pd-database
- dev-sba-database
# Arr Stack LXC (Proxmox)
arr-stack:
type: docker
ssh_alias: arr-stack
ip: 10.10.0.221
user: root
description: "Media automation stack (Sonarr, Radarr, etc.)"
config_paths:
docker-compose: /opt/arr-stack
services:
- arr-stack
# n8n LXC (Proxmox)
n8n:
type: docker
ssh_alias: n8n
ip: 10.10.0.210
user: root
description: "n8n workflow automation"
config_paths:
docker-compose: /opt/n8n
services:
- n8n
# Local Development Machine
nobara-desktop:
type: local
ip: null
user: cal
description: "Local development workstation"
config_paths:
docker-compose:
- /mnt/NV2/Development/major-domo/discord-app-v2
- /mnt/NV2/Development/strat-gameplay-webapp
services:
- major-domo-dev
- strat-gameplay-webapp
# Akamai Cloud Server
akamai:
type: docker
ssh_alias: akamai
ip: 172.237.147.99
user: root
description: "Akamai Linode - public-facing services"
config_paths:
docker-compose: /root/container-data
services:
- nginx-proxy-manager
- major-domo
- dev-paper-dynasty
- sba-database
- postgres
- sba-website
- sqlite-major-domo
- temp-postgres
# Decommissioned hosts (kept for reference)
# decommissioned:
# tdarr-old:
# ip: 10.10.0.43
# note: "Replaced by ubuntu-manticore tdarr"
# docker-home:
# ip: 10.10.0.124
# note: "Decommissioned"

@@ -0,0 +1,75 @@
version: '3.8'
services:
postgres:
image: postgres:15-alpine
container_name: n8n-postgres
restart: unless-stopped
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 5s
timeout: 5s
retries: 10
networks:
- n8n-network
n8n:
image: n8nio/n8n:latest
container_name: n8n
restart: unless-stopped
depends_on:
postgres:
condition: service_healthy
ports:
- "5678:5678"
environment:
# Database
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=postgres
- DB_POSTGRESDB_PORT=5432
- DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
- DB_POSTGRESDB_USER=${POSTGRES_USER}
- DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
# n8n Configuration
- N8N_HOST=${N8N_HOST}
- N8N_PORT=5678
- N8N_PROTOCOL=${N8N_PROTOCOL}
- WEBHOOK_URL=${WEBHOOK_URL}
- GENERIC_TIMEZONE=${TIMEZONE}
# Security
- N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
- N8N_BASIC_AUTH_ACTIVE=true
- N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
- N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
# Performance
- NODE_ENV=production
- EXECUTIONS_PROCESS=main
- EXECUTIONS_MODE=regular
# Logging
- N8N_LOG_LEVEL=info
- N8N_LOG_OUTPUT=console
volumes:
- n8n_data:/home/node/.n8n
networks:
- n8n-network
volumes:
postgres_data:
name: n8n_postgres_data
n8n_data:
name: n8n_data
networks:
n8n-network:
name: n8n-network
driver: bridge

@@ -0,0 +1,12 @@
arch: amd64
cores: 4
features: nesting=1
hostname: ansible
memory: 8096
nameserver: 10.10.0.16 1.1.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.0.1,hwaddr=F6:97:C3:23:0D:B2,ip=10.10.0.5/24,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-108-disk-0,size=32G
searchdomain: corumservers.com
swap: 8096
unprivileged: 1

@@ -0,0 +1,13 @@
arch: amd64
cores: 4
hostname: docker-n8n-lxc
memory: 8192
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=10.10.0.1,hwaddr=32:67:BE:A4:7F:F4,ip=10.10.0.210/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-210-disk-0,size=128G
swap: 512
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

@@ -0,0 +1,13 @@
arch: amd64
cores: 4
features: nesting=1,keyctl=1
hostname: docker-7days-lxc
memory: 32768
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=10.10.0.1,hwaddr=CE:7E:8F:B2:40:C2,ip=10.10.0.250/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-211-disk-0,size=128G
searchdomain: local
swap: 2048
lxc.apparmor.profile: unconfined

@@ -0,0 +1,10 @@
arch: amd64
cores: 2
features: nesting=1,keyctl=1
hostname: arr-stack
memory: 4096
net0: name=eth0,bridge=vmbr0,gw=10.10.0.1,hwaddr=5A:28:7D:A8:13:65,ip=10.10.0.221/24,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-221-disk-0,size=32G
swap: 512
lxc.apparmor.profile: unconfined

@@ -0,0 +1,13 @@
arch: amd64
cores: 1
features: nesting=1,keyctl=1
hostname: memos
memory: 1024
nameserver: 10.10.0.16
net0: name=eth0,bridge=vmbr0,gw=10.10.0.1,hwaddr=3E:9F:39:6F:B1:92,ip=10.10.0.222/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-222-disk-0,size=8G
searchdomain: local
swap: 512
lxc.apparmor.profile: unconfined

@@ -0,0 +1,20 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: ubuntu-template
net0: virtio=0E:A6:F2:0D:50:69,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:base-100-disk-0,size=288G
scsihw: virtio-scsi-pci
smbios1: uuid=52abe492-8d48-4e3d-bf39-2a8448d6dbec
sockets: 2
template: 1
vmgenid: a9118d77-133a-49ad-af18-8b15420958dc
[PENDING]
boot: order=scsi0;net0
delete: ide2

@@ -0,0 +1,14 @@
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 32768
meta: creation-qemu=6.1.0,ctime=1648393429
name: 7d-solo
net0: virtio=6A:40:D4:03:F1:23,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-101-disk-0,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=2f1c6c7d-8e39-4c44-9309-51c415a0e72f
sockets: 1
vmgenid: 8a9dd631-5875-4336-9113-ce9f3eff9f5f

@@ -0,0 +1,14 @@
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 32768
meta: creation-qemu=6.1.0,ctime=1648412615
name: 7d-staci
net0: virtio=8E:F7:FC:6B:6D:DC,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=f56221fd-246a-49b0-9cdb-a3458475d41e
sockets: 1
vmgenid: c09c50ab-93da-4fd8-8326-322c572c8bc3

@@ -0,0 +1,16 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-template
net0: virtio=8E:B7:FE:6D:D7:52,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:base-103-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=793ca394-3e87-40f2-8823-5d908c5ee564
sockets: 2
template: 1
vmgenid: 854bfe23-67cc-4dd7-a4d0-a93f10787e50

@@ -0,0 +1,14 @@
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 32768
meta: creation-qemu=6.1.0,ctime=1648393429
name: 7d-wotw
net0: virtio=C2:B3:F6:FD:84:CF,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-104-disk-0,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=d4e80082-2aaf-4602-94a5-c256a0121657
sockets: 1
vmgenid: 3db3e3f4-62e0-429f-9322-95d0de797e77

@@ -0,0 +1,15 @@
agent: 1
boot: order=scsi0;net0
cores: 8
memory: 16384
meta: creation-qemu=6.1.0,ctime=1646688596
name: docker-vpn
net0: virtio=76:36:85:A7:6A:A3,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-105-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=55061264-b9b1-4ce4-8d44-9c187affcb1d
sockets: 1
vmgenid: 30878bdf-66f9-41bf-be34-c31b400340f9

@@ -0,0 +1,15 @@
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 16384
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-home
net0: virtio=BA:65:DF:88:85:4C,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-106-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=54ef12fc-edcc-4744-a109-dd2de9a6dc03
sockets: 2
vmgenid: a13c92a2-a955-485e-a80e-391e99b19fbd

@@ -0,0 +1,15 @@
agent: 1
boot: order=scsi0;net0
cores: 8
memory: 16384
meta: creation-qemu=6.1.0,ctime=1646716190
name: plex
net0: virtio=D2:82:86:25:B9:FC,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-107-disk-0,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=e4a21847-39a7-4fb8-b9db-2fb2cf46077f
sockets: 2
vmgenid: ca0840de-bcf2-4c38-bcad-fcb883cd6b1e

@@ -0,0 +1,17 @@
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
efidisk0: local-lvm:vm-109-disk-1,efitype=4m,size=4M
ide2: none,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1647545631
name: hass-io
net0: virtio=D6:0D:3B:EC:7D:44,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-109-disk-0,size=32G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=06361ef8-7954-4211-98cd-061ff047c401
sockets: 1
vmgenid: 62003864-950d-4e6f-a959-67caf08d3d4a

@@ -0,0 +1,16 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: discord-bots
net0: virtio=DA:1E:55:E6:64:8B,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-110-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=b3384b2f-d395-44de-986d-6a4d938ddbea
sockets: 2
vmgenid: acd1ecab-d817-4880-9341-2ca0afd46230

@@ -0,0 +1,15 @@
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 32768
meta: creation-qemu=6.1.0,ctime=1667162709
name: docker-7days
net0: virtio=36:90:52:0E:42:B0,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: home-truenas:111/vm-111-disk-0.qcow2,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=9695bb1d-f840-44c7-8b6e-af03c7b559d7
sockets: 1
vmgenid: 337b527a-cbaa-44a5-ad69-3035853c001b

@@ -0,0 +1,16 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: none,media=cdrom
memory: 16384
meta: creation-qemu=6.1.0,ctime=1694032461
name: databases-bots
net0: virtio=D6:98:76:F1:AD:70,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-112-disk-0,size=512G
scsihw: virtio-scsi-pci
smbios1: uuid=ee8889c4-2cac-4704-afb5-f98cde4efb90
sockets: 4
vmgenid: 577cf49c-7722-437c-83ee-b1d6f12ce3ee

@@ -0,0 +1,16 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 4
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 16384
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-tdarr
net0: virtio=22:ED:56:82:D6:A7,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-113-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=2594a1a4-20ad-4861-99d5-4d6c58f69c46
sockets: 2
vmgenid: 59ad1317-11f0-42ab-9757-4a70a0ede22c

@@ -0,0 +1,15 @@
agent: 1
boot: order=scsi0;net0
cores: 2
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-pittsburgh
net0: virtio=5E:6D:4A:6C:D1:EE,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-114-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=8b24ecc0-ac58-441e-aa69-a3957073c7dc
sockets: 2
vmgenid: 89072f53-2238-4a53-8b20-ed7d1f47ac71

@@ -0,0 +1,16 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 8
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-sba
net0: virtio=2E:17:9E:D6:62:0E,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-115-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=19be98ee-f60d-473d-acd2-9164717fcd11
sockets: 2
vmgenid: 682dfeab-8c63-4f0b-8ed2-8828c2f808ef

@@ -0,0 +1,16 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-home-servers
net0: virtio=16:0C:5A:5E:D6:5D,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-116-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=e7ab7046-504b-462e-b3b8-5c12825a1407
sockets: 2
vmgenid: 9171ab63-7961-4b9d-9bd4-4e036519ecdb

@@ -0,0 +1,15 @@
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-20.04.4-desktop-amd64.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1646083628
name: docker-unused
net0: virtio=D6:88:7E:E5:17:01,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-117-disk-0,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=51c1d801-a2c8-416c-a7f9-a47fcc5ca783
sockets: 2
vmgenid: 2110750f-9078-42d7-9360-70e0a642cbed

@@ -0,0 +1,3 @@
# Major Domo Discord Bot - Environment Variables
BOT_TOKEN=your_discord_bot_token_here
API_TOKEN=your_api_token_here

@@ -0,0 +1,22 @@
services:
discord-app:
# build: ./discord-app
image: manticorum67/major-domo-discordapp:dev
restart: unless-stopped
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
networks:
- backend
environment:
- PYTHONUNBUFFERED=1
- BOT_TOKEN=${BOT_TOKEN}
- GUILD_ID=613880856032968834
- API_TOKEN=${API_TOKEN}
- TESTING=False
- LOG_LEVEL=INFO
- TZ=America/Chicago
networks:
backend:
driver: bridge

@@ -0,0 +1,4 @@
# Paper Dynasty Discord Bot - Environment Variables
BOT_TOKEN=your_discord_bot_token_here
API_TOKEN=your_api_token_here
DB_PASSWORD=your_database_password_here

@@ -0,0 +1,68 @@
version: '3'
services:
discord-app:
image: manticorum67/paper-dynasty-discordapp:latest
restart: unless-stopped
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
environment:
- PYTHONUNBUFFERED=1
- GUILD_ID=613880856032968834
- BOT_TOKEN=${BOT_TOKEN}
- LOG_LEVEL=INFO
- API_TOKEN=${API_TOKEN}
- SCOREBOARD_CHANNEL=1000623557191155804
- TZ=America/Chicago
- PYTHONHASHSEED=1749583062
- DATABASE=Prod
- DB_USERNAME=postgres
- DB_PASSWORD=${DB_PASSWORD}
- DB_URL=db
- DB_NAME=postgres
networks:
- backend
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "python3 -c 'import sys; sys.exit(0)' || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
db:
image: postgres:18
restart: unless-stopped
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- pd_postgres:/var/lib/postgresql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 1s
timeout: 5s
retries: 5
# start_period: 30s
networks:
- backend
adminer:
image: adminer
restart: always
ports:
- 8080:8080
networks:
- backend
networks:
backend:
driver: bridge
paper_dynasty:
driver: bridge
volumes:
pd_postgres:

@@ -0,0 +1,16 @@
version: '3'
services:
sba-ghost:
image: ghost:latest
restart: unless-stopped
ports:
- 2368:2368
volumes:
- ./ghost:/var/lib/ghost/content
environment:
database__client: sqlite3
database__connection__filename: /var/lib/ghost/content/data/ghost.db
url: https://sbanews.manticorum.com
NODE_ENV: production
TZ: America/Chicago

View File

@ -0,0 +1,24 @@
version: '3.8'
services:
sba-web:
image: manticorum67/sba-website:${VERSION:-latest}
ports:
- "803:80" # Use internal port since nginx proxy manager handles external routing
restart: unless-stopped
volumes:
- ./public:/usr/share/nginx/html/public
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
environment:
- NODE_ENV=production
networks:
- default
networks:
default:
driver: bridge
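
The `${VERSION:-latest}` image tag above uses shell-style default substitution: Compose resolves `VERSION` from the environment (or a `.env` file) and falls back to `latest` when it is unset. The expansion rule is the same one plain shell applies:

```shell
# ${VAR:-default}: use the default when VAR is unset or empty
unset VERSION
echo "manticorum67/sba-website:${VERSION:-latest}"   # → manticorum67/sba-website:latest
VERSION=1.2.3
echo "manticorum67/sba-website:${VERSION:-latest}"   # → manticorum67/sba-website:1.2.3
```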

View File

@ -0,0 +1,24 @@
version: '3.8'
services:
sba-web:
image: manticorum67/sba-website:${VERSION:-latest}
ports:
- "803:80" # Use internal port since nginx proxy manager handles external routing
restart: unless-stopped
volumes:
- ./public:/usr/share/nginx/html/public
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
environment:
- NODE_ENV=production
networks:
- default
networks:
default:
driver: bridge

View File

@ -0,0 +1,20 @@
version: '3'
services:
database:
# build: ./database-v2
image: manticorum67/paper-dynasty-database:dev
restart: unless-stopped
container_name: dev_pd_database
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
ports:
- 813:80
environment:
- TESTING=True
- LOG_LEVEL=INFO
- API_TOKEN=${API_TOKEN}
- TZ=America/Chicago
- WORKERS_PER_CORE=1.0
- PRIVATE_IN_SCHEMA=TRUE

View File

@ -0,0 +1,119 @@
version: '3'
#networks:
# nginx-proxy-manager_npm_network:
# external: true
services:
api:
# build: .
image: manticorum67/major-domo-database:dev
restart: unless-stopped
container_name: sba_db_api
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
ports:
- 814:80
networks:
- default
# - nginx-proxy-manager_npm_network
environment:
- TESTING=False
- LOG_LEVEL=${LOG_LEVEL}
- API_TOKEN=${API_TOKEN}
- TZ=${TZ}
- WORKERS_PER_CORE=1.5
- TIMEOUT=120
- GRACEFUL_TIMEOUT=120
- DATABASE_TYPE=postgresql
- POSTGRES_HOST=sba_postgres
- POSTGRES_DB=${SBA_DATABASE}
- POSTGRES_USER=${SBA_DB_USER}
- POSTGRES_PASSWORD=${SBA_DB_USER_PASSWORD}
- REDIS_HOST=sba_redis
- REDIS_PORT=6379
- REDIS_DB=0
depends_on:
- postgres
- redis
postgres:
image: postgres:17-alpine
restart: unless-stopped
container_name: sba_postgres
environment:
- POSTGRES_DB=${SBA_DATABASE}
- POSTGRES_USER=${SBA_DB_USER}
- POSTGRES_PASSWORD=${SBA_DB_USER_PASSWORD}
- TZ=${TZ}
volumes:
- postgres_data:/var/lib/postgresql/data
- ./logs:/var/log/postgresql
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${SBA_DB_USER} -d ${SBA_DATABASE}"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
redis:
image: redis:7-alpine
restart: unless-stopped
container_name: sba_redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
environment:
- TZ=${TZ}
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
command: redis-server --appendonly yes
adminer:
image: adminer:latest
restart: unless-stopped
container_name: sba_adminer
ports:
- "8080:8080"
environment:
- ADMINER_DEFAULT_SERVER=sba_postgres
- TZ=${TZ}
# - ADMINER_DESIGN=pepa-linha-dark
depends_on:
- postgres
sync-prod:
image: alpine:latest
container_name: sba_sync_prod
volumes:
- ./scripts:/scripts
- /home/cal/.ssh:/tmp/ssh:ro
environment:
- SBA_DB_USER=${SBA_DB_USER}
- SBA_DATABASE=${SBA_DATABASE}
- SBA_DB_USER_PASSWORD=${SBA_DB_USER_PASSWORD}
command: >
sh -c "
cp -r /tmp/ssh /root/.ssh &&
chmod 700 /root/.ssh &&
chmod 600 /root/.ssh/* &&
chown -R root:root /root/.ssh &&
/scripts/sync_from_prod.sh
"
profiles: ["sync"]
depends_on:
- postgres
networks:
- default
volumes:
postgres_data:
redis_data:

View File

@ -0,0 +1,38 @@
version: '3'
services:
# database:
# build: ./database
# restart: unless-stopped
# container_name: pd_database
# volumes:
# - ./storage:/usr/src/app/storage
# - ./logs:/usr/src/app/logs
# ports:
# - 811:80
# environment:
# - TESTING=False
# - LOG_LEVEL=INFO
# - API_TOKEN=${API_TOKEN}
# - TZ=America/Chicago
# - WORKERS_PER_CORE=1.5
# # - PYTHONHASHSEED=1749583062
database-v2:
# build: ./database-v2
image: manticorum67/paper-dynasty-database:1.5
restart: unless-stopped
container_name: pd_database_v2
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
ports:
- 815:80
environment:
- TESTING=False
- LOG_LEVEL=INFO
- API_TOKEN=${API_TOKEN}
- TZ=America/Chicago
- WORKERS_PER_CORE=1.5
- WORKER_TIMEOUT=180
- TIMEOUT=180

View File

@ -0,0 +1,2 @@
# PostgreSQL Database - Environment Variables
POSTGRES_PASSWORD=your_postgres_password_here

View File

@ -0,0 +1,19 @@
version: '3.9'
services:
database:
image: postgres
restart: unless-stopped
shm_size: 128mb
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
ports:
- 5432:5432
volumes:
- /home/cal/postgres-data:/var/lib/postgresql/data
adminer:
image: adminer
restart: unless-stopped
ports:
- 8080:8080

View File

@ -0,0 +1,11 @@
version: '3'
services:
sba-cards:
image: lipanski/docker-static-website:latest
restart: unless-stopped
ports:
- "804:3000"
volumes:
- ./cards:/home/static/cards
- ./images:/home/static/images
- ./httpd.conf:/home/static/httpd.conf:ro

View File

@ -0,0 +1,21 @@
version: '3'
services:
database:
# build: ./database
image: manticorum67/major-domo-database:latest
restart: unless-stopped
container_name: sba_database
volumes:
- ./storage:/usr/src/app/storage
- ./logs:/usr/src/app/logs
ports:
- 801:80
environment:
- TESTING=False
- LOG_LEVEL=INFO
- API_TOKEN=${API_TOKEN}
- TZ=America/Chicago
- WORKERS_PER_CORE=1.5
- TIMEOUT=120
- GRACEFUL_TIMEOUT=120

451
server-configs/sync-configs.sh Executable file
View File

@ -0,0 +1,451 @@
#!/bin/bash
# Home Lab Configuration Sync Script
# Syncs Docker Compose and VM configs between local git repo and remote hosts
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
HOSTS_FILE="$SCRIPT_DIR/hosts.yml"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Helper functions
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[OK]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
usage() {
cat << EOF
Usage: $(basename "$0") <command> [host] [service]
Commands:
pull [host] Pull configs from remote hosts to local repo
push [host] Push configs from local repo to remote hosts (no restart)
diff [host] Show differences between local and remote configs
deploy <host> <service> Push config and restart specific service
status Show sync status for all hosts
list List all configured hosts and services
Examples:
$(basename "$0") pull # Pull from all hosts
$(basename "$0") pull ubuntu-manticore # Pull from specific host
$(basename "$0") diff discord-bots # Show diffs for host
$(basename "$0") deploy sba-bots paper-dynasty # Deploy and restart service
EOF
exit 1
}
# Parse hosts.yml using simple bash (no yq dependency)
get_hosts() {
grep -E "^ [a-z]" "$HOSTS_FILE" | grep -v "^ #" | sed 's/://g' | awk '{print $1}'
}
get_host_property() {
local host="$1"
local property="$2"
# Simple YAML parsing - works for our flat structure
sed -n "/^ $host:/,/^ [a-z]/p" "$HOSTS_FILE" | grep " $property:" | head -1 | sed 's/.*: *//' | tr -d '"'
}
get_host_type() {
get_host_property "$1" "type"
}
get_ssh_alias() {
get_host_property "$1" "ssh_alias"
}
get_docker_path() {
local host="$1"
sed -n "/^ $host:/,/^ [a-z]/p" "$HOSTS_FILE" | grep "docker-compose:" | head -1 | sed 's/.*: *//' | tr -d '"'
}
# Check if host is reachable
check_host() {
local host="$1"
local host_type
host_type=$(get_host_type "$host")
if [[ "$host_type" == "local" ]]; then
return 0
fi
local ssh_alias
ssh_alias=$(get_ssh_alias "$host")
if ssh -o ConnectTimeout=3 -o BatchMode=yes "$ssh_alias" "echo ok" &>/dev/null; then
return 0
else
return 1
fi
}
# Pull configs from a single host
pull_host() {
local host="$1"
local host_type
host_type=$(get_host_type "$host")
local local_dir="$SCRIPT_DIR/$host"
log_info "Pulling configs from $host..."
if [[ "$host_type" == "local" ]]; then
log_warn "Skipping local host $host (no pull needed)"
return 0
fi
if ! check_host "$host"; then
log_error "Cannot connect to $host"
return 1
fi
local ssh_alias
ssh_alias=$(get_ssh_alias "$host")
if [[ "$host_type" == "proxmox" ]]; then
# Pull LXC configs
mkdir -p "$local_dir/lxc"
log_info " Pulling LXC configs..."
rsync -av --delete "$ssh_alias:/etc/pve/nodes/proxmox/lxc/" "$local_dir/lxc/" 2>/dev/null || true
# Pull QEMU configs
mkdir -p "$local_dir/qemu"
log_info " Pulling QEMU/VM configs..."
rsync -av --delete "$ssh_alias:/etc/pve/nodes/proxmox/qemu-server/" "$local_dir/qemu/" 2>/dev/null || true
elif [[ "$host_type" == "docker" ]]; then
local remote_path
remote_path=$(get_docker_path "$host")
if [[ -z "$remote_path" ]]; then
log_warn " No docker-compose path configured for $host"
return 0
fi
mkdir -p "$local_dir/docker-compose"
log_info " Pulling Docker Compose configs from $remote_path..."
# Find and sync ONLY docker-compose files (not application data)
ssh "$ssh_alias" "find $remote_path -maxdepth 2 \( -name 'docker-compose*.yml' -o -name 'compose.yml' \) 2>/dev/null" | while read -r compose_file; do
local service_dir
service_dir=$(dirname "$compose_file")
local service_name
service_name=$(basename "$service_dir")
mkdir -p "$local_dir/docker-compose/$service_name"
# ONLY sync compose files and .env.example - nothing else!
# Explicitly copy just the files we want, not directories
scp "$ssh_alias:$service_dir/docker-compose*.yml" "$local_dir/docker-compose/$service_name/" 2>/dev/null || true
scp "$ssh_alias:$service_dir/compose.yml" "$local_dir/docker-compose/$service_name/" 2>/dev/null || true
scp "$ssh_alias:$service_dir/.env.example" "$local_dir/docker-compose/$service_name/" 2>/dev/null || true
done
fi
log_success "Pulled configs from $host"
}
# Push configs to a single host
push_host() {
local host="$1"
local host_type
host_type=$(get_host_type "$host")
local local_dir="$SCRIPT_DIR/$host"
log_info "Pushing configs to $host..."
if [[ "$host_type" == "local" ]]; then
log_warn "Skipping local host $host (no push needed)"
return 0
fi
if [[ ! -d "$local_dir" ]]; then
log_error "No local configs found for $host"
return 1
fi
if ! check_host "$host"; then
log_error "Cannot connect to $host"
return 1
fi
local ssh_alias
ssh_alias=$(get_ssh_alias "$host")
if [[ "$host_type" == "proxmox" ]]; then
log_warn "Pushing Proxmox configs requires manual review - skipping for safety"
log_info " Use 'diff $host' to review changes first"
return 0
elif [[ "$host_type" == "docker" ]]; then
local remote_path
remote_path=$(get_docker_path "$host")
if [[ ! -d "$local_dir/docker-compose" ]]; then
log_warn " No docker-compose configs to push for $host"
return 0
fi
for service_dir in "$local_dir/docker-compose"/*/; do
local service_name
service_name=$(basename "$service_dir")
local remote_service_path="$remote_path/$service_name"
log_info " Pushing $service_name..."
# Preview the transfer first
rsync -av --dry-run \
--include='docker-compose*.yml' \
--include='compose.yml' \
--include='*.conf' \
--exclude='*' \
"$service_dir" "$ssh_alias:$remote_service_path/" 2>/dev/null
# Actual push (the dry run above was only a preview)
rsync -av \
--include='docker-compose*.yml' \
--include='compose.yml' \
--include='*.conf' \
--exclude='*' \
"$service_dir" "$ssh_alias:$remote_service_path/" 2>/dev/null || true
done
fi
log_success "Pushed configs to $host (services NOT restarted)"
}
# Show diff between local and remote
diff_host() {
local host="$1"
local host_type
host_type=$(get_host_type "$host")
local local_dir="$SCRIPT_DIR/$host"
log_info "Comparing configs for $host..."
if [[ "$host_type" == "local" ]]; then
log_warn "Skipping local host $host"
return 0
fi
if ! check_host "$host"; then
log_error "Cannot connect to $host"
return 1
fi
local ssh_alias
ssh_alias=$(get_ssh_alias "$host")
local temp_dir
temp_dir=$(mktemp -d)
# Pull current remote state to temp
if [[ "$host_type" == "proxmox" ]]; then
mkdir -p "$temp_dir/lxc" "$temp_dir/qemu"
rsync -a "$ssh_alias:/etc/pve/nodes/proxmox/lxc/" "$temp_dir/lxc/" 2>/dev/null || true
rsync -a "$ssh_alias:/etc/pve/nodes/proxmox/qemu-server/" "$temp_dir/qemu/" 2>/dev/null || true
echo ""
echo "=== LXC Config Differences ==="
diff -rq "$local_dir/lxc" "$temp_dir/lxc" 2>/dev/null || true
echo ""
echo "=== QEMU Config Differences ==="
diff -rq "$local_dir/qemu" "$temp_dir/qemu" 2>/dev/null || true
elif [[ "$host_type" == "docker" ]]; then
local remote_path
remote_path=$(get_docker_path "$host")
for service_dir in "$local_dir/docker-compose"/*/; do
local service_name
service_name=$(basename "$service_dir")
local remote_service_path="$remote_path/$service_name"
mkdir -p "$temp_dir/$service_name"
rsync -a \
--include='docker-compose*.yml' \
--include='compose.yml' \
--exclude='*' \
"$ssh_alias:$remote_service_path/" "$temp_dir/$service_name/" 2>/dev/null || true
echo ""
echo "=== $service_name ==="
diff -u "$temp_dir/$service_name/docker-compose.yml" "$service_dir/docker-compose.yml" 2>/dev/null || echo "(no differences or file missing)"
done
fi
rm -rf "$temp_dir"
}
# Deploy a specific service (push + restart)
deploy_service() {
local host="$1"
local service="$2"
local host_type
host_type=$(get_host_type "$host")
if [[ "$host_type" != "docker" ]]; then
log_error "Deploy only works for docker hosts"
return 1
fi
local ssh_alias
ssh_alias=$(get_ssh_alias "$host")
local remote_path
remote_path=$(get_docker_path "$host")
local local_dir="$SCRIPT_DIR/$host/docker-compose/$service"
local remote_service_path="$remote_path/$service"
if [[ ! -d "$local_dir" ]]; then
log_error "No local config found for $service on $host"
return 1
fi
if ! check_host "$host"; then
log_error "Cannot connect to $host"
return 1
fi
log_info "Deploying $service to $host..."
# Push the config
rsync -av \
--include='docker-compose*.yml' \
--include='compose.yml' \
--include='*.conf' \
--exclude='*' \
"$local_dir/" "$ssh_alias:$remote_service_path/"
# Restart the service
log_info "Restarting $service..."
ssh "$ssh_alias" "cd $remote_service_path && docker compose down && docker compose up -d"
log_success "Deployed and restarted $service on $host"
}
# Show status of all hosts
show_status() {
echo ""
echo "Home Lab Configuration Status"
echo "=============================="
echo ""
for host in $(get_hosts); do
local host_type
host_type=$(get_host_type "$host")
local status_icon
if [[ "$host_type" == "local" ]]; then
status_icon="${GREEN}✓${NC}"
status_text="local"
elif check_host "$host"; then
status_icon="${GREEN}✓${NC}"
status_text="online"
else
status_icon="${RED}✗${NC}"
status_text="offline"
fi
local local_dir="$SCRIPT_DIR/$host"
local config_count=0
if [[ -d "$local_dir" ]]; then
config_count=$(find "$local_dir" -name "*.yml" -o -name "*.conf" 2>/dev/null | wc -l)
fi
printf " %b %-20s %-10s %-8s %d configs tracked\n" "$status_icon" "$host" "($host_type)" "$status_text" "$config_count"
done
echo ""
}
# List all hosts and services
list_hosts() {
echo ""
echo "Configured Hosts and Services"
echo "=============================="
for host in $(get_hosts); do
local host_type
host_type=$(get_host_type "$host")
local description
description=$(get_host_property "$host" "description")
echo ""
echo -e "${BLUE}$host${NC} ($host_type)"
echo " $description"
local local_dir="$SCRIPT_DIR/$host/docker-compose"
if [[ -d "$local_dir" ]]; then
echo " Services:"
for service in "$local_dir"/*/; do
if [[ -d "$service" ]]; then
echo " - $(basename "$service")"
fi
done
fi
done
echo ""
}
# Main command dispatcher
main() {
if [[ $# -lt 1 ]]; then
usage
fi
local command="$1"
shift
case "$command" in
pull)
if [[ $# -ge 1 ]]; then
pull_host "$1"
else
for host in $(get_hosts); do
pull_host "$host" || true
done
fi
;;
push)
if [[ $# -ge 1 ]]; then
push_host "$1"
else
for host in $(get_hosts); do
push_host "$host" || true
done
fi
;;
diff)
if [[ $# -ge 1 ]]; then
diff_host "$1"
else
for host in $(get_hosts); do
diff_host "$host" || true
done
fi
;;
deploy)
if [[ $# -lt 2 ]]; then
log_error "Deploy requires host and service arguments"
usage
fi
deploy_service "$1" "$2"
;;
status)
show_status
;;
list)
list_hosts
;;
*)
log_error "Unknown command: $command"
usage
;;
esac
}
main "$@"
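
The `get_hosts` / `get_host_property` helpers above parse hosts.yml with grep and sed rather than a YAML library, so they depend on a fixed layout: host keys one indent level below the top, properties nested beneath them. A minimal sketch of that assumed shape (host names, aliases, and paths here are illustrative, not the real inventory):

```shell
# Illustrative hosts.yml with the indentation the parser expects
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" << 'EOF'
hosts:
 example-docker-host:
   type: docker
   ssh_alias: example
   paths:
     docker-compose: /home/cal/docker
 example-proxmox:
   type: proxmox
   ssh_alias: pve
EOF

# Same pipeline as get_hosts(): indented keys become host names
hosts=$(grep -E "^ [a-z]" "$HOSTS_FILE" | grep -v "^ #" | sed 's/://g' | awk '{print $1}')
echo "$hosts"

# Same logic as get_host_property(): scope to one host block, then grab the key
type=$(sed -n "/^ example-docker-host:/,/^ [a-z]/p" "$HOSTS_FILE" | grep " type:" | head -1 | sed 's/.*: *//' | tr -d '"')
echo "$type"
rm -f "$HOSTS_FILE"
```

Because the parsing is line-based, renaming keys or changing the indentation in hosts.yml silently breaks every lookup, which is why the inventory file should keep this exact shape.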

View File

@ -0,0 +1,25 @@
services:
jellyfin:
image: jellyfin/jellyfin:latest
container_name: jellyfin
restart: unless-stopped
environment:
- PUID=1000
- PGID=1000
- TZ=America/Chicago
- NVIDIA_DRIVER_CAPABILITIES=all
- NVIDIA_VISIBLE_DEVICES=all
ports:
- "8096:8096" # Web UI
- "7359:7359/udp" # Client discovery
volumes:
- ./config:/config
- /mnt/NV2/jellyfin-cache:/cache
- /mnt/truenas/media:/media:ro
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]

View File

@ -0,0 +1,46 @@
version: "3.8"
services:
tdarr:
image: ghcr.io/haveagitgat/tdarr:latest
container_name: tdarr-server
restart: unless-stopped
ports:
- "8265:8265" # Web UI
- "8266:8266" # Server port (for nodes)
environment:
- PUID=1000
- PGID=1000
- TZ=America/Chicago
- serverIP=0.0.0.0
- serverPort=8266
- webUIPort=8265
volumes:
- ./server-data:/app/server
- ./configs:/app/configs
- ./logs:/app/logs
- /mnt/truenas/media:/media
tdarr-node:
image: ghcr.io/haveagitgat/tdarr_node:latest
container_name: tdarr-node
restart: unless-stopped
environment:
- PUID=1000
- PGID=1000
- TZ=America/Chicago
- serverIP=tdarr
- serverPort=8266
- nodeName=manticore-gpu
volumes:
- ./node-data:/app/configs
- /mnt/truenas/media:/media
- /mnt/NV2/tdarr-cache:/temp
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
depends_on:
- tdarr

View File

@ -0,0 +1,12 @@
services:
watchstate:
image: ghcr.io/arabcoders/watchstate:latest
container_name: watchstate
restart: unless-stopped
user: "1000:1000"
ports:
- "8080:8080"
environment:
- TZ=America/Chicago
volumes:
- ./data:/config:rw