
---
title: Tdarr Scripts Operational Context
description: "Overview of Tdarr automation scripts: gaming-aware scheduler, container management (Podman), cron-based scheduling engine, configuration presets, and unmapped node architecture with local NVMe cache."
type: context
domain: tdarr
tags:
  - tdarr
  - scripts
  - podman
  - cron
  - gaming-detection
  - scheduler
  - nvidia
  - automation
---
# Tdarr Scripts - Operational Context

## Script Overview

This directory contains active operational scripts for Tdarr transcoding automation, gaming-aware scheduling, and system management.

## Core Scripts

### Gaming-Aware Scheduler

**Primary Script:** `tdarr-schedule-manager.sh`

**Purpose:** Comprehensive management interface for gaming-aware Tdarr scheduling

**Key Functions:**

- **Preset Management**: Quick schedule templates (`night-only`, `work-safe`, `weekend-heavy`, `gaming-only`)
- **Installation**: Automated cron job setup and configuration
- **Status Monitoring**: Real-time status reporting and log viewing
- **Configuration**: Interactive schedule editing and validation

**Usage Patterns:**

```bash
# Quick setup
./tdarr-schedule-manager.sh preset work-safe
./tdarr-schedule-manager.sh install

# Monitoring
./tdarr-schedule-manager.sh status
./tdarr-schedule-manager.sh logs

# Testing
./tdarr-schedule-manager.sh test
```

### Container Management

**Start Script:** `start-tdarr-gpu-podman-clean.sh`

**Purpose:** Launch the unmapped Tdarr node with an optimized configuration

**Key Features:**

- **Unmapped Node Configuration**: Local cache for optimal performance
- **GPU Support**: Full NVIDIA device passthrough
- **Resource Optimization**: Direct NVMe cache mapping
- **Clean Architecture**: No media volume dependencies

**Stop Script:** `stop-tdarr-gpu-podman.sh`

**Purpose:** Graceful container shutdown with cleanup

### Scheduling Engine

**Core Engine:** `tdarr-cron-check-configurable.sh`

**Purpose:** Minute-by-minute decision engine for Tdarr state management

**Decision Logic:**

1. **Gaming Detection**: Check for active gaming processes
2. **GPU Monitoring**: Verify GPU usage is below the threshold (15%)
3. **Time Window Validation**: Ensure the current time falls within an allowed schedule block
4. **State Management**: Start or stop Tdarr based on these conditions

**Gaming Process Detection:**

- Steam, Lutris, Heroic Games Launcher
- Wine, Bottles (Windows compatibility layers)
- GameMode, MangoHUD (gaming utilities)
- GPU usage monitoring via `nvidia-smi`
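The first two checks can be sketched as small shell helpers. The function names and the fail-safe behavior below are illustrative, not the actual contents of `tdarr-cron-check-configurable.sh`:

```bash
#!/usr/bin/env bash
# Sketch of the gaming/GPU checks (helper names are illustrative).

GAMING_PROCESSES="steam lutris heroic wine bottles gamemode mangohud"
GPU_THRESHOLD=15

# True (exit 0) if any configured gaming process is running.
gaming_active() {
    local p
    for p in $GAMING_PROCESSES; do
        pgrep -x "$p" >/dev/null 2>&1 && return 0
    done
    return 1
}

# True (exit 0) if GPU utilization exceeds the threshold.
# A missing reading fails safe: treat the GPU as busy.
gpu_busy() {
    local util
    util=$(nvidia-smi --query-gpu=utilization.gpu \
                      --format=csv,noheader,nounits 2>/dev/null | head -n1)
    [ -z "$util" ] && return 0
    [ "$util" -gt "$GPU_THRESHOLD" ]
}

# The real script would podman start/stop tdarr-node-gpu at this point.
if gaming_active || gpu_busy; then
    echo "conditions busy: Tdarr should be stopped"
else
    echo "conditions idle: Tdarr may run (subject to schedule)"
fi
```

Treating a missing `nvidia-smi` reading as "busy" errs on the side of never contending with an active GPU workload.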

### Configuration Management

**Config File:** `tdarr-schedule.conf`

**Purpose:** Centralized configuration for scheduler behavior

**Configuration Structure:**

```bash
# Time blocks: "HOUR_START-HOUR_END:DAYS"
SCHEDULE_BLOCKS="22-07:daily 09-17:1-5"

# Gaming detection settings
GPU_THRESHOLD=15
GAMING_PROCESSES="steam lutris heroic wine bottles gamemode mangohud"

# Operational settings
LOG_FILE="/tmp/tdarr-scheduler.log"
CONTAINER_NAME="tdarr-node-gpu"
```

## Operational Patterns

### Automated Maintenance

**Cron Integration:** Two automated systems run simultaneously:

1. **Scheduler** (every minute): `tdarr-cron-check-configurable.sh`
2. **Cleanup** (every 6 hours): Temporary directory maintenance

**Cleanup Automation:**

```bash
# Every 6 hours, remove abandoned transcoding work directories
# not modified in the last 360 minutes (6 hours)
0 */6 * * * find /tmp -name "tdarr-workDir2-*" -type d -mmin +360 -exec rm -rf {} \; 2>/dev/null || true
```
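The scheduler itself would be installed by `tdarr-schedule-manager.sh install` as a per-minute crontab entry along these lines (the script path here is illustrative, not taken from the actual install step):

```bash
# Decision engine runs every minute; output folded into the scheduler log
* * * * * /path/to/scripts/tdarr-cron-check-configurable.sh >> /tmp/tdarr-scheduler.log 2>&1
```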

### Logging Strategy

- **Log Location**: `/tmp/tdarr-scheduler.log`
- **Log Format**: Timestamped entries with decision reasoning
- **Log Rotation**: Manual cleanup, focused on recent activity

**Log Examples:**

```
[2025-08-13 14:30:01] Gaming detected (steam), stopping Tdarr
[2025-08-13 14:35:01] Gaming ended, but outside allowed hours (14:35 not in 22-07:daily)
[2025-08-13 22:00:01] Starting Tdarr (no gaming, within schedule)
```
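Entries in that shape need only a small helper (a sketch; the actual scripts may format their logs differently):

```bash
#!/usr/bin/env bash
LOG_FILE="${LOG_FILE:-/tmp/tdarr-scheduler.log}"

# Append a timestamped single-line entry with the decision reasoning.
log() {
    printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" >> "$LOG_FILE"
}

log "Starting Tdarr (no gaming, within schedule)"
```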

### System Integration

- **Gaming Detection**: Real-time process monitoring
- **GPU Monitoring**: `nvidia-smi` integration for usage thresholds
- **Container Management**: Podman-based lifecycle management
- **Cron Integration**: Standard system scheduler for automation

## Configuration Presets

### Preset Profiles

- **night-only**: `"22-07:daily"` - Overnight transcoding only
- **work-safe**: `"22-07:daily 09-17:1-5"` - Nights plus weekday work hours
- **weekend-heavy**: `"22-07:daily 09-17:1-5 08-20:6-7"` - Maximum transcoding time
- **gaming-only**: No time limits; gaming detection only

### Schedule Format Specification

**Format:** `"HOUR_START-HOUR_END:DAYS"`

**Examples:**

- `"22-07:daily"` - 10 PM to 7 AM every day (overnight)
- `"09-17:1-5"` - 9 AM to 5 PM, Monday through Friday
- `"14-16:6,7"` - 2 PM to 4 PM, Saturday and Sunday
- `"08-20:6-7"` - 8 AM to 8 PM, weekends only
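Matching a wall-clock hour and day-of-week against one block, including the overnight wrap in blocks like `22-07`, can be sketched as follows (the function name is hypothetical; the real parser lives in `tdarr-cron-check-configurable.sh`):

```bash
#!/usr/bin/env bash
# in_block HOUR DOW BLOCK -> exit 0 if HOUR/DOW fall inside BLOCK.
# DOW is 1-7 with Monday=1 (as in "09-17:1-5"); "daily" matches any day.
in_block() {
    local hour=$1 dow=$2 block=$3
    local hours=${block%%:*} days=${block#*:}
    local start=${hours%%-*} end=${hours#*-}

    # Day match: "daily", an explicit list like "6,7", or a range "1-5".
    case ",$days," in
        *,daily,*|*,"$dow",*) ;;
        *) local lo=${days%%-*} hi=${days#*-}
           [ "$lo" -le "$dow" ] 2>/dev/null && [ "$dow" -le "$hi" ] 2>/dev/null \
               || return 1 ;;
    esac

    # Hour match, handling the overnight wrap (e.g. 22-07).
    if [ "$start" -le "$end" ]; then
        [ "$hour" -ge "$start" ] && [ "$hour" -lt "$end" ]
    else
        [ "$hour" -ge "$start" ] || [ "$hour" -lt "$end" ]
    fi
}

# Example: 23:00 on a Wednesday (dow 3) is inside "22-07:daily"
in_block 23 3 "22-07:daily" && echo "in window"   # prints "in window"
```

The end hour is treated as exclusive, so `22-07` runs from 22:00 up to (but not including) 07:00.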

## Container Architecture

### Unmapped Node Configuration

**Architecture Choice:** Local cache with API-based file transfers

**Benefits:** 3-5x performance improvement, reduced network dependency

**Container Environment:**

```bash
-e nodeType=unmapped
-e unmappedNodeCache=/cache
-e enableGpu=true
-e TZ=America/New_York
```

**Volume Configuration:**

```bash
# Local high-speed cache (NVMe)
-v "/mnt/NV2/tdarr-cache:/cache"

# Configuration persistence
-v "/mnt/NV2/tdarr-cache-clean:/temp"

# No media volumes (unmapped mode uses the API)
```
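Put together, the launch performed by `start-tdarr-gpu-podman-clean.sh` would look roughly like this. The GPU device syntax and the image reference are assumptions, not taken from the script; check the script itself for the exact flags:

```bash
# NVIDIA passthrough shown via CDI (assumed; the script may instead use
# --gpus or explicit --device /dev/nvidia* mappings)
podman run -d \
  --name tdarr-node-gpu \
  --device nvidia.com/gpu=all \
  -e nodeType=unmapped \
  -e unmappedNodeCache=/cache \
  -e enableGpu=true \
  -e TZ=America/New_York \
  -v "/mnt/NV2/tdarr-cache:/cache" \
  -v "/mnt/NV2/tdarr-cache-clean:/temp" \
  ghcr.io/haveagitgat/tdarr_node:latest   # image reference is an assumption
```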

### Resource Management

- **GPU Access**: Full NVIDIA device passthrough
- **Memory**: Controlled by container limits
- **CPU**: Shared with the host system
- **Storage**: Local NVMe for optimal I/O performance

## Troubleshooting Context

### Common Issues

1. **Gaming Not Detected**: Check process names in the configuration
2. **Time Window Issues**: Verify the schedule block format
3. **Container Start Failures**: Check GPU device access
4. **Log File Growth**: Manually clean up scheduler logs

### Diagnostic Commands

```bash
# Test current conditions
./tdarr-schedule-manager.sh test

# View real-time logs
./tdarr-schedule-manager.sh logs

# Check container status
podman ps | grep tdarr

# Verify GPU access
podman exec tdarr-node-gpu nvidia-smi
```

### Recovery Procedures

```bash
# Reset to defaults
./tdarr-schedule-manager.sh preset work-safe

# Reinstall scheduler
./tdarr-schedule-manager.sh install

# Manual container restart
./stop-tdarr-gpu-podman.sh
./start-tdarr-gpu-podman-clean.sh
```

## Integration Points

### External Dependencies

- **Podman**: Container runtime for node management
- **nvidia-smi**: GPU monitoring and device access
- **cron**: System scheduler for automation
- **SSH**: Remote server access (monitoring scripts)

### File System Dependencies

- **Cache Directory**: `/mnt/NV2/tdarr-cache` (local NVMe)
- **Temp Directory**: `/mnt/NV2/tdarr-cache-clean` (processing space)
- **Log Files**: `/tmp/tdarr-scheduler.log` (operational logs)
- **Configuration**: Local `tdarr-schedule.conf` file

### Network Dependencies

- **Tdarr Server**: API communication for unmapped node operation
- **Discord Webhooks**: Optional notification integration (via monitoring)
- **NAS Access**: Final file storage (post-processing only)

This operational context provides comprehensive guidance for managing active Tdarr automation scripts in production environments.