claude-home/examples/docker/tdarr-node-local/docker-compose-gpu.yml

version: "3.4"
services:
tdarr-node:
container_name: tdarr-node-local-gpu
image: ghcr.io/haveagitgat/tdarr_node:latest
restart: unless-stopped
environment:
- TZ=America/Chicago
- UMASK_SET=002
- nodeName=local-workstation-gpu
- serverIP=192.168.1.100 # Replace with your Tdarr server IP
- serverPort=8266
- inContainer=true
- ffmpegVersion=6
# NVIDIA environment variables
- NVIDIA_DRIVER_CAPABILITIES=all
- NVIDIA_VISIBLE_DEVICES=all
volumes:
# Media access (same as server)
- /mnt/media:/media # Replace with your media path
# Local transcoding cache
- ./temp:/temp
devices:
- /dev/dri:/dev/dri # Intel/AMD GPU fallback
    # GPU configuration - choose ONE method:
    # Method 1: Deploy syntax (recommended)
    deploy:
      resources:
        limits:
          memory: 16G # GPU transcoding uses less RAM
        reservations:
          memory: 8G
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
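    # NOTE: Method 1 assumes the NVIDIA Container Toolkit is installed on the
    # host and a recent Docker Compose release is in use; older Compose
    # releases may not honor the `devices` reservation above.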
    # Method 2: Runtime (alternative)
    # runtime: nvidia
    # Method 3: CDI (future)
    # devices:
    #   - nvidia.com/gpu=all
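
# Usage sketch (assumes this file is saved as docker-compose-gpu.yml and the
# serverIP/media path placeholders above have been adjusted for your setup):
#   docker compose -f docker-compose-gpu.yml up -d
#   docker exec tdarr-node-local-gpu nvidia-smi   # should list the GPU if passthrough works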