Tdarr Scripts - Operational Context
Script Overview
This directory contains active operational scripts for Tdarr transcoding automation, gaming-aware scheduling, and system management.
Core Scripts
Gaming-Aware Scheduler
Primary Script: tdarr-schedule-manager.sh
Purpose: Comprehensive management interface for gaming-aware Tdarr scheduling
Key Functions:
- Preset Management: Quick schedule templates (night-only, work-safe, weekend-heavy, gaming-only)
- Installation: Automated cron job setup and configuration
- Status Monitoring: Real-time status and logging
- Configuration: Interactive schedule editing and validation
Usage Patterns:
# Quick setup
./tdarr-schedule-manager.sh preset work-safe
./tdarr-schedule-manager.sh install
# Monitoring
./tdarr-schedule-manager.sh status
./tdarr-schedule-manager.sh logs
# Testing
./tdarr-schedule-manager.sh test
Container Management
Start Script: start-tdarr-gpu-podman-clean.sh
Purpose: Launch unmapped Tdarr node with optimized configuration
Key Features:
- Unmapped Node Configuration: Local cache for optimal performance
- GPU Support: Full NVIDIA device passthrough
- Resource Optimization: Direct NVMe cache mapping
- Clean Architecture: No media volume dependencies
Stop Script: stop-tdarr-gpu-podman.sh
Purpose: Graceful container shutdown with cleanup
Scheduling Engine
Core Engine: tdarr-cron-check-configurable.sh
Purpose: Minute-by-minute decision engine for Tdarr state management
Decision Logic:
- Gaming Detection: Check for active gaming processes
- GPU Monitoring: Verify GPU usage below threshold (15%)
- Time Window Validation: Ensure current time within allowed schedule
- State Management: Start/stop Tdarr based on conditions
Gaming Process Detection:
- Steam, Lutris, Heroic Games Launcher
- Wine, Bottles (Windows compatibility layers)
- GameMode, MangoHUD (gaming utilities)
- GPU usage monitoring via nvidia-smi
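The decision flow above can be sketched as a minimal shell fragment. This is an illustrative reconstruction, not the engine's actual code: the helper names (is_gaming_active, gpu_busy) are hypothetical, while the process list, threshold, and container name mirror the configuration shown below.

```shell
#!/usr/bin/env bash
# Sketch of the per-minute decision flow (illustrative helper names).
GAMING_PROCESSES="steam lutris heroic wine bottles gamemode mangohud"
GPU_THRESHOLD=15
CONTAINER_NAME="tdarr-node-gpu"

is_gaming_active() {
    # True if any configured gaming process is currently running
    local proc
    for proc in $GAMING_PROCESSES; do
        pgrep -x "$proc" >/dev/null 2>&1 && return 0
    done
    return 1
}

gpu_busy() {
    # True if GPU utilization exceeds the configured threshold;
    # treats a missing/failed nvidia-smi as 0% usage
    local usage
    usage=$(nvidia-smi --query-gpu=utilization.gpu \
        --format=csv,noheader,nounits 2>/dev/null | head -n1)
    [ "${usage:-0}" -gt "$GPU_THRESHOLD" ]
}

if is_gaming_active || gpu_busy; then
    podman stop "$CONTAINER_NAME" >/dev/null 2>&1 || true
else
    # The real engine also validates the time window before starting
    podman start "$CONTAINER_NAME" >/dev/null 2>&1 || true
fi
```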
Configuration Management
Config File: tdarr-schedule.conf
Purpose: Centralized configuration for scheduler behavior
Configuration Structure:
# Time blocks: "HOUR_START-HOUR_END:DAYS"
SCHEDULE_BLOCKS="22-07:daily 09-17:1-5"
# Gaming detection settings
GPU_THRESHOLD=15
GAMING_PROCESSES="steam lutris heroic wine bottles gamemode mangohud"
# Operational settings
LOG_FILE="/tmp/tdarr-scheduler.log"
CONTAINER_NAME="tdarr-node-gpu"
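Because the config file is plain shell variable assignments, the scripts can load it by sourcing it. A hedged sketch of defensive loading with fallback defaults (the path handling and validation here are assumptions, not the actual script's logic):

```shell
# Load tdarr-schedule.conf with fallback defaults (illustrative).
CONFIG_FILE="${CONFIG_FILE:-./tdarr-schedule.conf}"

# Defaults, overridden by the config file if present
GPU_THRESHOLD=15
LOG_FILE="/tmp/tdarr-scheduler.log"
CONTAINER_NAME="tdarr-node-gpu"

if [ -f "$CONFIG_FILE" ]; then
    # shellcheck source=/dev/null
    . "$CONFIG_FILE"
fi

# Minimal validation: threshold must be a non-negative integer
case "$GPU_THRESHOLD" in
    ''|*[!0-9]*) echo "Invalid GPU_THRESHOLD: $GPU_THRESHOLD" >&2; exit 1 ;;
esac
```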
Operational Patterns
Automated Maintenance
Cron Integration: Two automated systems running simultaneously
- Scheduler (every minute): tdarr-cron-check-configurable.sh
- Cleanup (every 6 hours): temporary directory maintenance
Cleanup Automation:
# Removes abandoned transcoding directories
0 */6 * * * find /tmp -name "tdarr-workDir2-*" -type d -mmin +360 -exec rm -rf {} \; 2>/dev/null || true
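The same find expression can be dry-run before installing the cron entry by swapping the delete action for -print, which lists the directories that would be removed without touching them:

```shell
# List abandoned work directories older than 6 hours without deleting them
find /tmp -name "tdarr-workDir2-*" -type d -mmin +360 -print 2>/dev/null || true
```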
Logging Strategy
Log Location: /tmp/tdarr-scheduler.log
Log Format: Timestamped entries with decision reasoning
Log Rotation: Manual cleanup, focused on recent activity
Log Examples:
[2025-08-13 14:30:01] Gaming detected (steam), stopping Tdarr
[2025-08-13 14:35:01] Gaming ended, but outside allowed hours (14:35 not in 22-07:daily)
[2025-08-13 22:00:01] Starting Tdarr (no gaming, within schedule)
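Entries in that format can be produced with a small helper along these lines (the function name is hypothetical; the actual script's implementation may differ):

```shell
LOG_FILE="${LOG_FILE:-/tmp/tdarr-scheduler.log}"

# Append a timestamped entry with decision reasoning to the scheduler log
log() {
    printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" >> "$LOG_FILE"
}

log "Gaming detected (steam), stopping Tdarr"
```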
System Integration
Gaming Detection: Real-time process monitoring
GPU Monitoring: nvidia-smi integration for usage thresholds
Container Management: Podman-based lifecycle management
Cron Integration: Standard system scheduler for automation
Configuration Presets
Preset Profiles
night-only: "22-07:daily" - Overnight transcoding only
work-safe: "22-07:daily 09-17:1-5" - Nights + work hours
weekend-heavy: "22-07:daily 09-17:1-5 08-20:6-7" - Maximum time
gaming-only: No time limits, gaming detection only
Schedule Format Specification
Format: "HOUR_START-HOUR_END:DAYS"
Examples:
"22-07:daily" - 10PM to 7AM every day (overnight)
"09-17:1-5" - 9AM to 5PM Monday-Friday
"14-16:6,7" - 2PM to 4PM Saturday and Sunday
"08-20:6-7" - 8AM to 8PM weekends only
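A sketch of how a block in this format might be evaluated against the current time. Function names and the end-exclusive/wraparound handling are illustrative assumptions; day numbering follows `date +%u` (1=Monday ... 7=Sunday), matching the 1-5 and 6-7 examples above.

```shell
# Return 0 if HOUR falls inside a "START-END" range, handling
# overnight wraparound (e.g. 22-07 matches hours 22-23 and 0-6)
hour_in_range() {
    local hour=$1 start=$2 end=$3
    if [ "$start" -le "$end" ]; then
        [ "$hour" -ge "$start" ] && [ "$hour" -lt "$end" ]
    else
        [ "$hour" -ge "$start" ] || [ "$hour" -lt "$end" ]
    fi
}

# Return 0 if hour/day-of-week match any block in a SCHEDULE_BLOCKS string
in_schedule() {
    local blocks=$1 hour=$2 dow=$3 block range days start end
    for block in $blocks; do
        range=${block%%:*}; days=${block#*:}
        start=${range%-*};  end=${range#*-}
        case "$days" in
            daily) ;;                                              # every day
            *-*)   [ "$dow" -ge "${days%-*}" ] &&
                   [ "$dow" -le "${days#*-}" ] || continue ;;      # day range
            *,*)   case ",$days," in
                       *",$dow,"*) ;;
                       *) continue ;;
                   esac ;;                                         # day list
            *)     [ "$dow" = "$days" ] || continue ;;             # single day
        esac
        hour_in_range "$hour" "$start" "$end" && return 0
    done
    return 1
}
```

Under these assumptions the engine would call something like `in_schedule "$SCHEDULE_BLOCKS" "$(date +%H)" "$(date +%u)"` each minute.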
Container Architecture
Unmapped Node Configuration
Architecture Choice: Local cache with API-based transfers
Benefits: 3-5x performance improvement, reduced network dependency
Container Environment:
-e nodeType=unmapped
-e unmappedNodeCache=/cache
-e enableGpu=true
-e TZ=America/New_York
Volume Configuration:
# Local high-speed cache (NVMe)
-v "/mnt/NV2/tdarr-cache:/cache"
# Configuration persistence
-v "/mnt/NV2/tdarr-cache-clean:/temp"
# No media volumes (unmapped mode uses API)
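Assembling the environment and volume flags above, the launch command in the start script presumably looks roughly like the following. The image tag, GPU passthrough syntax (CDI-style), and flag ordering are assumptions; the actual script may differ.

```shell
# Illustrative podman run for the unmapped GPU node (image tag assumed)
podman run -d --name tdarr-node-gpu \
    --device nvidia.com/gpu=all \
    -e nodeType=unmapped \
    -e unmappedNodeCache=/cache \
    -e enableGpu=true \
    -e TZ=America/New_York \
    -v "/mnt/NV2/tdarr-cache:/cache" \
    -v "/mnt/NV2/tdarr-cache-clean:/temp" \
    ghcr.io/haveagitgat/tdarr_node:latest
```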
Resource Management
GPU Access: Full NVIDIA device passthrough
Memory: Controlled by container limits
CPU: Shared with host system
Storage: Local NVMe for optimal I/O performance
Troubleshooting Context
Common Issues
- Gaming Not Detected: Check process names in configuration
- Time Window Issues: Verify schedule block format
- Container Start Failures: Check GPU device access
- Log File Growth: Manual cleanup of scheduler logs
Diagnostic Commands
# Test current conditions
./tdarr-schedule-manager.sh test
# View real-time logs
./tdarr-schedule-manager.sh logs
# Check container status
podman ps | grep tdarr
# Verify GPU access
podman exec tdarr-node-gpu nvidia-smi
Recovery Procedures
# Reset to defaults
./tdarr-schedule-manager.sh preset work-safe
# Reinstall scheduler
./tdarr-schedule-manager.sh install
# Manual container restart
./stop-tdarr-gpu-podman.sh
./start-tdarr-gpu-podman-clean.sh
Integration Points
External Dependencies
- Podman: Container runtime for node management
- nvidia-smi: GPU monitoring and device access
- cron: System scheduler for automation
- SSH: Remote server access (monitoring scripts)
File System Dependencies
- Cache Directory: /mnt/NV2/tdarr-cache (local NVMe)
- Temp Directory: /mnt/NV2/tdarr-cache-clean (processing space)
- Log Files: /tmp/tdarr-scheduler.log (operational logs)
- Configuration: Local tdarr-schedule.conf file
Network Dependencies
- Tdarr Server: API communication for unmapped node operation
- Discord Webhooks: Optional notification integration (via monitoring)
- NAS Access: For final file storage (post-processing only)
This operational context provides comprehensive guidance for managing active Tdarr automation scripts in production environments.