Tdarr Transcoding System - Technology Context
Overview
Tdarr is a distributed transcoding system that converts media files to optimized formats. This implementation uses a gaming-aware scheduler with an unmapped node architecture for optimal performance and system stability.
Architecture Patterns
Distributed Unmapped Node Architecture (Recommended)
Pattern: Server-Node separation with local high-speed cache
- Server: Tdarr Server manages queue, web interface, and coordination
- Node: Unmapped nodes with local NVMe cache for processing
- Benefits: 3-5x performance improvement, network I/O reduction, linear scaling
When to Use:
- Multiple transcoding nodes across network
- High-performance requirements (10GB+ files)
- Network bandwidth limitations
- Gaming systems requiring GPU priority management
Configuration Principles
- Cache Optimization: Use local NVMe storage for work directories
- Gaming Detection: Automatic pause during GPU-intensive activities
- Resource Isolation: Container limits prevent kernel-level crashes
- Monitoring Integration: Automated cleanup and Discord notifications
Core Components
Gaming-Aware Scheduler
Purpose: Automatically manages the Tdarr node to avoid conflicts with gaming
Location: scripts/tdarr-schedule-manager.sh
Key Features:
- Detects gaming processes (Steam, Lutris, Wine, etc.)
- GPU usage monitoring (>15% threshold)
- Configurable time windows
- Automated temporary directory cleanup
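The features above might be wired together roughly as follows. This is an illustrative sketch only, not the actual contents of tdarr-schedule-manager.sh: the process names, the `nvidia-smi` query, and the start/stop actions are assumptions.

```shell
#!/usr/bin/env bash
# Hedged sketch of gaming detection; names and helpers are illustrative.
GPU_THRESHOLD=15

# True when GPU utilization (percent, passed as $1) exceeds the threshold
gpu_busy() {
  [ "${1:-0}" -gt "$GPU_THRESHOLD" ]
}

# True when a common gaming process is running (example list, not exhaustive)
gaming_process_running() {
  pgrep -x 'steam|lutris|wine|gamescope' >/dev/null 2>&1
}

util=0
command -v nvidia-smi >/dev/null 2>&1 && \
  util=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits | head -n1)

if gaming_process_running || gpu_busy "$util"; then
  echo "pause transcoding"    # e.g. podman stop tdarr-node-gpu
else
  echo "resume transcoding"   # e.g. podman start tdarr-node-gpu
fi
```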
Schedule Format: "HOUR_START-HOUR_END:DAYS"
- "22-07:daily" - Overnight transcoding
- "09-17:1-5" - Business hours, weekdays only
- "14-16:6,7" - Weekend afternoon window
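A minimal sketch of evaluating that format against an hour and weekday (an illustration of the format's semantics, not the scheduler's actual parser):

```shell
#!/usr/bin/env bash
# in_window SPEC HOUR DAY   (DAY: 1=Monday .. 7=Sunday)
in_window() {
  local spec=$1 hour=$((10#$2)) day=$3
  local hours=${spec%%:*} days=${spec#*:}
  local start=$((10#${hours%-*})) end=$((10#${hours#*-}))

  # Day match: "daily", a range like "1-5", or a list like "6,7"
  case $days in
    daily) ;;
    *-*)   [ "$day" -ge "${days%-*}" ] && [ "$day" -le "${days#*-}" ] || return 1 ;;
    *)     case ",$days," in (*,"$day",*) ;; (*) return 1 ;; esac ;;
  esac

  # Hour match: a start past the end wraps overnight through midnight
  if [ "$start" -le "$end" ]; then
    [ "$hour" -ge "$start" ] && [ "$hour" -lt "$end" ]
  else
    [ "$hour" -ge "$start" ] || [ "$hour" -lt "$end" ]
  fi
}

in_window "22-07:daily" 23 2 && echo "transcode allowed"   # → transcode allowed
```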
Monitoring System
Purpose: Prevents staging section timeouts and system instability
Location: scripts/monitoring/tdarr-timeout-monitor.sh
Capabilities:
- Staging timeout detection (300-second hardcoded limit)
- Automatic work directory cleanup
- Discord notifications with user pings
- Log rotation and retention management
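The cleanup-and-notify flow can be sketched like this. The directory naming, cache path, and webhook payload are assumptions, not the actual monitor script:

```shell
#!/usr/bin/env bash
TIMEOUT_SECS=300   # matches Tdarr's hardcoded staging limit

# Print work directories under $1 untouched for longer than $2 seconds
find_stale() {
  find "$1" -mindepth 1 -maxdepth 1 -type d -name 'tdarr-workDir*' \
       -mmin +"$(( $2 / 60 ))" 2>/dev/null
}

# Remove stale work dirs under $1 and ping a Discord webhook ($2)
cleanup() {
  local cache_dir=$1 webhook=$2 stale
  stale=$(find_stale "$cache_dir" "$TIMEOUT_SECS")
  [ -z "$stale" ] && return 0
  printf 'Removing stale work dirs:\n%s\n' "$stale"
  printf '%s\n' "$stale" | xargs -r rm -rf --
  # Discord notification with a user ping (URL and user ID are placeholders)
  curl -fsS -H 'Content-Type: application/json' \
       -d '{"content":"<@USER_ID> Tdarr staging timeout: stale dirs removed"}' \
       "$webhook" >/dev/null || true
}

# Example: cleanup "/mnt/NV2/tdarr-cache" "$DISCORD_WEBHOOK_URL"
```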
Container Architecture
Server Configuration:
```yaml
# Hybrid storage with resource limits
services:
  tdarr:
    image: ghcr.io/haveagitgat/tdarr:latest
    ports:
      - "8265:8265"   # web UI
      - "8266:8266"   # server communication
    volumes:
      - "./tdarr-data:/app/configs"
      - "/mnt/media:/media"
```
Node Configuration:
```shell
# Unmapped node with local cache
podman run -d \
  --name tdarr-node-gpu \
  -e nodeType=unmapped \
  -v "/mnt/NV2/tdarr-cache:/cache" \
  --device nvidia.com/gpu=all \
  ghcr.io/haveagitgat/tdarr_node:latest
```
Implementation Patterns
Performance Optimization
- Local Cache Strategy: Download → Process → Upload (vs. streaming)
- Resource Limits: Prevent memory exhaustion and kernel crashes
- Network Resilience: CIFS mount options for stability
- Automated Cleanup: Prevent accumulation of stuck directories
Error Prevention
- Plugin Safety: Null-safe forEach operations, e.g. `(streams || []).forEach()`
- Clean Installation: Avoid custom plugin mounts causing version conflicts
- Container Isolation: Resource limits prevent system-level crashes
- Network Stability: Unmapped architecture reduces CIFS dependency
Gaming Integration
- Process Detection: Monitor for gaming applications and utilities
- GPU Threshold: Stop transcoding when GPU usage >15%
- Time Windows: Respect user-defined allowed transcoding hours
- Manual Override: Direct start/stop commands bypass scheduler
Common Workflows
Initial Setup
- Start server with "Allow unmapped Nodes" enabled
- Configure node as unmapped with local cache
- Install gaming-aware scheduler via cron
- Set up monitoring system for automated cleanup
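Step 3 might look like the following crontab entry (the five-minute interval, install path, and log location are assumptions; the script name comes from the scheduler section above):

```shell
# crontab -e
*/5 * * * * /opt/scripts/tdarr-schedule-manager.sh "22-07:daily" >> /var/log/tdarr-scheduler.log 2>&1
```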
Troubleshooting Patterns
- forEach Errors: Clean plugin installation, avoid custom mounts
- Staging Timeouts: Monitor system handles automatic cleanup
- System Crashes: Convert to unmapped node architecture
- Network Issues: Implement CIFS resilience options
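For the network case, CIFS resilience options might look like this fstab entry (server, share, and option values are illustrative; tune for your environment):

```shell
# /etc/fstab
//nas/media  /mnt/media  cifs  credentials=/etc/cifs-creds,vers=3.0,soft,echo_interval=15,actimeo=30,noserverino,_netdev,x-systemd.automount  0  0
```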
Performance Tuning
- Cache Size: 100-500GB NVMe for concurrent jobs
- Bandwidth: Unmapped nodes reduce streaming requirements
- Scaling: Linear scaling with additional unmapped nodes
- GPU Priority: Gaming detection ensures responsive system
Best Practices
Production Deployment
- Use unmapped node architecture for stability
- Implement comprehensive monitoring
- Configure gaming-aware scheduling for desktop systems
- Set appropriate container resource limits
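As one example of the last point, limits can be added to the node container from the Container Architecture section at run time (the values here are illustrative, not recommendations):

```shell
podman run -d \
  --name tdarr-node-gpu \
  --memory 8g \
  --cpus 4 \
  --pids-limit 512 \
  -e nodeType=unmapped \
  -v "/mnt/NV2/tdarr-cache:/cache" \
  --device nvidia.com/gpu=all \
  ghcr.io/haveagitgat/tdarr_node:latest
```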
Development Guidelines
- Test with internal Tdarr test files first
- Implement null-safety checks in custom plugins
- Use structured logging for troubleshooting
- Separate concerns: scheduling, monitoring, processing
Security Considerations
- Container isolation prevents system-level failures
- Resource limits protect against memory exhaustion
- Network mount resilience prevents kernel crashes
- Automated cleanup prevents disk space issues
Migration Patterns
From Mapped to Unmapped Nodes
- Enable "Allow unmapped Nodes" in server options
- Update node configuration (add nodeType=unmapped)
- Change cache volume to local storage
- Remove media volume mapping
- Test workflow and monitor performance
Plugin System Cleanup
- Remove all custom plugin mounts
- Force server restart to regenerate plugin ZIP
- Restart nodes to download fresh plugins
- Verify forEach fixes in downloaded plugins
This technology context provides the foundation for implementing, troubleshooting, and optimizing Tdarr transcoding systems in home lab environments.