# Jellyfin Setup on ubuntu-manticore

## Overview

Jellyfin media server deployed on ubuntu-manticore (10.10.0.226) with an NVIDIA GTX 1070 for hardware-accelerated transcoding.

Date: 2025-12-04
## Architecture

```
ubuntu-manticore (10.10.0.226)
└── jellyfin (container)
    ├── Web UI: http://10.10.0.226:8096
    ├── Discovery: UDP 7359
    ├── Config: ~/docker/jellyfin/config/
    ├── Cache: /mnt/NV2/jellyfin-cache (NVMe)
    ├── Media: /mnt/truenas/media (read-only)
    └── GPU: GTX 1070 (NVENC/NVDEC)
```
## Docker Compose Configuration

Location: `~/docker/jellyfin/docker-compose.yml`
```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    ports:
      - "8096:8096"       # Web UI
      - "7359:7359/udp"   # Client discovery
    volumes:
      - ./config:/config
      - /mnt/NV2/jellyfin-cache:/cache
      - /mnt/truenas/media:/media:ro
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
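To apply changes, the stack is managed from the compose directory. A typical workflow, assuming the Docker Compose v2 plugin (with the standalone v1 binary the command is `docker-compose` instead):

```bash
cd ~/docker/jellyfin
docker compose pull               # fetch the latest jellyfin/jellyfin image
docker compose up -d              # (re)create the container with the current config
docker compose logs -f jellyfin   # follow startup logs (Ctrl-C to detach)
```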
## Storage Strategy

| Container path | Host location | Purpose |
|---|---|---|
| `/config` | `~/docker/jellyfin/config/` | Database, settings (small, OS drive) |
| `/cache` | `/mnt/NV2/jellyfin-cache` | Transcoding temp, trickplay images (large, NVMe) |
| `/media` | `/mnt/truenas/media` | Media library (read-only, network) |
Why NVMe for cache:
- Trickplay (timeline thumbnails) can grow to several GB
- Chapter images add up across large libraries
- Transcoding temp files need fast I/O
- Prevents filling up the boot drive
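When the cache mount needs a sanity check, disk usage can be inspected directly on the host (subdirectory names such as transcodes or trickplay vary by Jellyfin version, so the wildcard listing is the safer bet):

```bash
# Total cache usage, then the largest subdirectories
du -sh /mnt/NV2/jellyfin-cache
du -sh /mnt/NV2/jellyfin-cache/* 2>/dev/null | sort -rh | head
```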
## GPU Hardware Acceleration

### Available Encoders

| Encoder | Status |
|---|---|
| h264_nvenc | Working |
| hevc_nvenc | Working |
| av1_nvenc | Working |
### Available Decoders

| Decoder | Status |
|---|---|
| h264_cuvid | Working |
| hevc_cuvid | Working |
| av1_cuvid | Working |
| vp8_cuvid | Working |
| vp9_cuvid | Working |
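These tables can be re-verified against the ffmpeg build bundled with the container. A quick check, assuming the official image's jellyfin-ffmpeg path of `/usr/lib/jellyfin-ffmpeg/ffmpeg` (other images may place it elsewhere):

```bash
# NVENC encoders and CUVID/NVDEC decoders exposed by jellyfin-ffmpeg
docker exec jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep nvenc
docker exec jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -decoders | grep cuvid
```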
### Hardware Acceleration Types

- CUDA
- VAAPI
- QSV (not applicable; Intel-only)
- Vulkan
- OpenCL
### Configuring Hardware Transcoding

1. Go to Dashboard → Playback → Transcoding
2. Select NVIDIA NVENC as the hardware acceleration type
3. Enable the desired codecs for hardware decoding/encoding
4. Save changes, then confirm the GPU is actually used during playback (see the check below)
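To confirm NVENC is engaged, watch GPU utilization on the host while a client plays something that forces a transcode; the `enc`/`dec` columns should rise above 0 (column layout varies slightly by driver version):

```bash
# Per-second GPU utilization: sm, mem, enc, dec
nvidia-smi dmon -s u
```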
## Ports

| Port | Protocol | Purpose |
|---|---|---|
| 8096 | TCP | Web UI and API |
| 7359 | UDP | Client auto-discovery on LAN |
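A quick reachability check from another machine on the LAN (the `/health` endpoint returns `Healthy` on current Jellyfin releases; older versions may not expose it):

```bash
# HTTP probe against the web UI/API port
curl -fsS http://10.10.0.226:8096/health
# Confirm both ports are bound on the host itself
ss -tulnp | grep -E ':(8096|7359)'
```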
## Initial Setup

1. Access http://10.10.0.226:8096
2. Complete the setup wizard:
   - Create an admin account
   - Add media libraries (Movies: `/media/Movies`, TV: `/media/TV`; see the mount check below)
   - Configure metadata providers
   - Skip NFO savers (not needed unless sharing metadata with Kodi)
3. Configure hardware transcoding in the Dashboard (see above)
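Before adding libraries, it helps to confirm the NAS share is actually visible inside the container at the paths used above (folder names are specific to this setup):

```bash
# Movies and TV should be listed and readable (mounted :ro)
docker exec jellyfin ls -l /media
```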
## Resource Sharing with Tdarr

Both Jellyfin and Tdarr share the GTX 1070. To prevent conflicts:
- Tdarr: Limited to 1 GPU worker
- Jellyfin: Handles real-time transcoding on-demand
- GTX 1070: Supports 2 concurrent NVENC sessions (consumer card limit)
Jellyfin transcodes are more latency-sensitive (users waiting for playback), so Tdarr yields priority by limiting concurrent jobs.
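To see how many encoder sessions are active at a given moment, `nvidia-smi` exposes encoder statistics on the host (these query fields exist on recent drivers; older ones may report N/A):

```bash
# Current NVENC session count and average encode FPS on the GTX 1070
nvidia-smi --query-gpu=encoder.stats.sessionCount,encoder.stats.averageFps --format=csv
```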
## Watch History Sync (Future)

For syncing watch history between Plex and Jellyfin:

- Use `ghcr.io/arabcoders/watchstate` (a run sketch is below)
- Syncs via API, not NFO files
- NFO files don't store watch state
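A minimal sketch for standing the container up later, assuming watchstate's default `/config` volume for its state; the web UI port, API tokens, and backend configuration should be taken from the project's README before relying on this:

```bash
# Hypothetical host path for watchstate state; adjust as needed
mkdir -p ~/docker/watchstate/config
docker run -d --name watchstate \
  -v ~/docker/watchstate/config:/config \
  ghcr.io/arabcoders/watchstate:latest
```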
## Troubleshooting

### GPU Not Detected in Transcoding

```bash
# Verify GPU access in container
docker exec jellyfin nvidia-smi
```
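If that fails, the host side is the next thing to rule out (this assumes GPU access is provided through the NVIDIA Container Toolkit):

```bash
# Driver and GPU visible on the host itself
nvidia-smi
# NVIDIA runtime registered with Docker
docker info | grep -i runtimes
```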
### Check Available Encoders

Container logs show the available encoders on startup:

```bash
docker logs jellyfin 2>&1 | grep -i "available encoders"
```
### Transcoding Failures

Check the Jellyfin logs in Dashboard → Logs, or:

```bash
docker logs jellyfin 2>&1 | tail -50
```
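For failed transcodes specifically, Jellyfin also writes per-session ffmpeg logs under the log directory in `/config`; file naming varies by version, so listing the newest entries is the simplest approach:

```bash
# Most recent log files, including ffmpeg transcode logs
docker exec jellyfin sh -c 'ls -lt /config/log | head'
```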
## Related Documentation

- Server inventory: `networking/server-inventory.md`
- Tdarr setup: `tdarr/ubuntu-manticore-setup.md`