CLAUDE: Convert Tdarr node from unmapped to mapped configuration
- Updated start-tdarr-gpu-podman-clean.sh to use mapped node with direct media access
- Changed container name from tdarr-node-gpu-unmapped to tdarr-node-gpu-mapped
- Changed node name from nobara-pc-gpu-unmapped to nobara-pc-gpu-mapped
- Updated volume mounts to map TV and Movies directories separately
- Preserved NVMe cache and temp directory configurations
- Updated documentation to reflect mapped node architecture
- Added comparison between mapped and unmapped configurations in examples

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
parent daedfb298c
commit db47ee2c07
@@ -39,27 +39,52 @@ services:
 **Recommended for Fedora/RHEL/CentOS/Nobara systems:**
 
+### Mapped Node (Direct Media Access)
+
 ```bash
-podman run -d --name tdarr-node-gpu \
-  --device nvidia.com/gpu=all \
+podman run -d --name tdarr-node-gpu-mapped \
+  --gpus all \
   --restart unless-stopped \
   -e TZ=America/Chicago \
   -e UMASK_SET=002 \
-  -e nodeName=local-workstation-gpu \
+  -e nodeName=local-workstation-gpu-mapped \
   -e serverIP=10.10.0.43 \
   -e serverPort=8266 \
   -e inContainer=true \
   -e ffmpegVersion=6 \
   -e NVIDIA_DRIVER_CAPABILITIES=all \
   -e NVIDIA_VISIBLE_DEVICES=all \
-  -v ./media:/media \
-  -v ./temp:/temp \
+  -v /mnt/NV2/tdarr-cache:/cache \
+  -v /mnt/media/TV:/media/TV \
+  -v /mnt/media/Movies:/media/Movies \
+  -v /mnt/media/tdarr/tdarr-cache-clean:/temp \
   ghcr.io/haveagitgat/tdarr_node:latest
 ```
 
-**Use case**:
+### Unmapped Node (Downloads Files)
+
+```bash
+podman run -d --name tdarr-node-gpu-unmapped \
+  --gpus all \
+  --restart unless-stopped \
+  -e TZ=America/Chicago \
+  -e UMASK_SET=002 \
+  -e nodeName=local-workstation-gpu-unmapped \
+  -e serverIP=10.10.0.43 \
+  -e serverPort=8266 \
+  -e inContainer=true \
+  -e ffmpegVersion=6 \
+  -e NVIDIA_DRIVER_CAPABILITIES=all \
+  -e NVIDIA_VISIBLE_DEVICES=all \
+  -v /mnt/NV2/tdarr-cache:/cache \
+  -v /mnt/media:/media \
+  -v /mnt/media/tdarr/tdarr-cache-clean:/temp \
+  ghcr.io/haveagitgat/tdarr_node:latest
+```
+
+**Use cases**:
+- **Mapped**: Direct media access, faster processing, no file downloads
+- **Unmapped**: Works when network shares aren't available locally
 - Hardware video encoding/decoding (NVENC/NVDEC)
-- High-performance transcoding
+- High-performance transcoding with NVMe cache
 - Multiple concurrent streams
 - Fedora-based systems where Podman works better than Docker
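The two run commands above differ only in the name suffix and the volume mounts. A hypothetical helper (not part of this commit) makes the distinction explicit, using the mount paths shown in the diff:

```bash
#!/bin/sh
# Hypothetical sketch: build the media-mount arguments for each node type,
# using the paths from the examples above.
tdarr_media_mounts() {
  if [ "$1" = "mapped" ]; then
    # Mapped: each library is mounted at the path the Tdarr server expects
    echo "-v /mnt/media/TV:/media/TV -v /mnt/media/Movies:/media/Movies"
  else
    # Unmapped: one broad mount; the node copies files into its cache instead
    echo "-v /mnt/media:/media"
  fi
}

tdarr_media_mounts mapped
tdarr_media_mounts unmapped
```

Both variants still share the NVMe `/cache` and `/temp` mounts, which is why only the media mounts are parameterized here.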
@@ -10,7 +10,7 @@ This system automatically manages your Tdarr transcoding node to avoid conflicts
 
 | File | Purpose |
 |------|---------|
-| `start-tdarr-gpu-podman-clean.sh` | Start Tdarr container with GPU support |
+| `start-tdarr-gpu-podman-clean.sh` | Start mapped Tdarr container with GPU support |
 | `stop-tdarr-gpu-podman.sh` | Stop Tdarr container |
 | `tdarr-cron-check-configurable.sh` | Main scheduler (runs every minute via cron) |
 | `tdarr-schedule-manager.sh` | Management interface and configuration tool |
@@ -39,17 +39,22 @@ A cron job automatically cleans up abandoned Tdarr transcoding directories:
 
 ## 🚀 Quick Start
 
-1. **Install the scheduler:**
+1. **Start the mapped Tdarr node:**
+   ```bash
+   ./start-tdarr-gpu-podman-clean.sh
+   ```
+
+2. **Install the scheduler:**
    ```bash
    ./tdarr-schedule-manager.sh install
    ```
 
-2. **Check current status:**
+3. **Check current status:**
    ```bash
    ./tdarr-schedule-manager.sh status
    ```
 
-3. **Test your current schedule:**
+4. **Test your current schedule:**
    ```bash
    ./tdarr-schedule-manager.sh test
    ```
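The README describes `tdarr-cron-check-configurable.sh` as running every minute via cron, but the crontab line itself is installed by `tdarr-schedule-manager.sh install` and never appears in this diff. A typical every-minute entry might look like the following; the script path is an assumption:

```bash
# Hypothetical crontab entry -- the real one is written by
# `tdarr-schedule-manager.sh install`, and the path below is assumed.
* * * * * /home/user/scripts/tdarr-cron-check-configurable.sh >/dev/null 2>&1
```

Redirecting both streams to `/dev/null` keeps cron from mailing output every minute; the scheduler's own logging (if any) is unaffected.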
@@ -125,6 +130,13 @@ crontab -e  # Delete the tdarr line
 
 ## 🏗️ Architecture
 
+**Node Configuration:** Mapped node with direct media access
+- `/mnt/media/TV:/media/TV` - Direct TV library access
+- `/mnt/media/Movies:/media/Movies` - Direct Movies library access
+- `/mnt/NV2/tdarr-cache:/cache` - NVMe cache for optimal performance
+- `/mnt/media/tdarr/tdarr-cache-clean:/temp` - Temp processing space
+
+**Scheduler Flow:**
 ```
 ┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
 │   cron (1min)   │───▶│ configurable.sh  │───▶│ start/stop.sh   │
@@ -1,15 +1,15 @@
 #!/bin/bash
-# Tdarr Unmapped Node with GPU Support - NVMe Cache Optimization
-# This script starts an unmapped Tdarr node with local NVMe cache
+# Tdarr Mapped Node with GPU Support - NVMe Cache Optimization
+# This script starts a mapped Tdarr node with local NVMe cache
 
 set -e
 
-CONTAINER_NAME="tdarr-node-gpu-unmapped"
+CONTAINER_NAME="tdarr-node-gpu-mapped"
 SERVER_IP="10.10.0.43"
 SERVER_PORT="8266"  # Standard server port
-NODE_NAME="nobara-pc-gpu-unmapped"
+NODE_NAME="nobara-pc-gpu-mapped"
 
-echo "🚀 Starting UNMAPPED Tdarr Node with GPU support using Podman..."
+echo "🚀 Starting MAPPED Tdarr Node with GPU support using Podman..."
 
 # Stop and remove existing container if it exists
 if podman ps -a --format "{{.Names}}" | grep -q "^${CONTAINER_NAME}$"; then
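One detail worth noting in the existence check above: the grep pattern is anchored with `^` and `$`. That matters for exactly this commit, because the old and new container names share a prefix; an unanchored match for `tdarr-node-gpu` would also hit `tdarr-node-gpu-mapped`. A small demonstration with plain `grep`:

```bash
#!/bin/sh
# The anchored pattern only matches the exact container name, so a name that
# merely shares a prefix is correctly rejected.
CONTAINER_NAME="tdarr-node-gpu"
printf '%s\n' "tdarr-node-gpu-mapped" | grep -q "^${CONTAINER_NAME}$" \
  && echo "matched" || echo "no match"
# → no match
```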
@@ -18,12 +18,8 @@ if podman ps -a --format "{{.Names}}" | grep -q "^${CONTAINER_NAME}$"; then
     podman rm "${CONTAINER_NAME}" 2>/dev/null || true
 fi
 
-# Create required directories
-echo "📁 Creating required directories..."
-mkdir -p ./media ./tmp
-
-# Start Tdarr node with GPU support - CLEAN VERSION
-echo "🎬 Starting Clean Tdarr Node container..."
+# Start Tdarr node with GPU support - MAPPED VERSION
+echo "🎬 Starting Mapped Tdarr Node container..."
 podman run -d --name "${CONTAINER_NAME}" \
   --gpus all \
   --restart unless-stopped \
@@ -32,14 +28,15 @@ podman run -d --name "${CONTAINER_NAME}" \
   -e nodeName="${NODE_NAME}" \
   -e serverIP="${SERVER_IP}" \
   -e serverPort="${SERVER_PORT}" \
-  -e nodeType=unmapped \
   -e inContainer=true \
   -e ffmpegVersion=6 \
   -e logLevel=DEBUG \
   -e NVIDIA_DRIVER_CAPABILITIES=all \
   -e NVIDIA_VISIBLE_DEVICES=all \
   -v "/mnt/NV2/tdarr-cache:/cache" \
-  -v "/mnt/media:/app/unmappedNodeCache/nobara-pc-gpu-unmapped/media" \
+  -v "/mnt/media/TV:/media/TV" \
+  -v "/mnt/media/Movies:/media/Movies" \
+  -v "/mnt/media/tdarr/tdarr-cache-clean:/temp" \
   ghcr.io/haveagitgat/tdarr_node:latest
 
 echo "⏳ Waiting for container to initialize..."
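The mount change above is the heart of the mapped configuration: instead of one broad mount into the node's `unmappedNodeCache` staging directory, each library is mounted so that its container-side path mirrors what the Tdarr server reports, letting jobs resolve source files directly. A small sketch (mount strings taken from the diff above) splitting each `host:container` pair to show the correspondence:

```bash
#!/bin/sh
# For each mount string, the text before the first colon is the host path and
# the text after the last colon is the container path; in a mapped node the
# container path should match the server's library path.
for m in "/mnt/media/TV:/media/TV" "/mnt/media/Movies:/media/Movies"; do
  host="${m%%:*}"   # host-side path
  ctr="${m##*:}"    # container-side path
  echo "host=${host} -> container=${ctr}"
done
# → host=/mnt/media/TV -> container=/media/TV
# → host=/mnt/media/Movies -> container=/media/Movies
```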
@@ -47,7 +44,7 @@ sleep 5
 
 # Check container status
 if podman ps --format "{{.Names}}" | grep -q "^${CONTAINER_NAME}$"; then
-    echo "✅ Unmapped Tdarr Node is running successfully!"
+    echo "✅ Mapped Tdarr Node is running successfully!"
     echo ""
     echo "📊 Container Status:"
     podman ps --filter "name=${CONTAINER_NAME}" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"