CLAUDE: Add media server documentation with Jellyfin setup
- Add CONTEXT.md with GPU transcoding patterns
- Add Jellyfin ubuntu-manticore setup guide (10.10.0.226)
- Document GPU resource sharing with Tdarr

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
parent 782067344a · commit 5bd5e31798

media-servers/CONTEXT.md (new file, 133 lines)
# Media Servers - Technology Context

## Overview

Media server infrastructure for home lab environments, covering streaming services like Jellyfin and Plex with hardware-accelerated transcoding, library management, and client discovery.

## Current Deployments

### Jellyfin on ubuntu-manticore

- **Location**: 10.10.0.226:8096
- **GPU**: NVIDIA GTX 1070 (NVENC/NVDEC)
- **Documentation**: `jellyfin-ubuntu-manticore.md`

### Plex (Existing)

- **Location**: TBD (potential migration to ubuntu-manticore)
- **Note**: Currently running elsewhere, may migrate for GPU access

## Architecture Patterns

### GPU-Accelerated Transcoding

**Pattern**: Hardware encoding/decoding for real-time streaming

```yaml
# Docker Compose GPU passthrough
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
environment:
  - NVIDIA_DRIVER_CAPABILITIES=all
  - NVIDIA_VISIBLE_DEVICES=all
```

### Storage Strategy

**Pattern**: Tiered storage for different access patterns (see the volume sketch below)

- **Config**: Local SSD (small, fast database access)
- **Cache**: Local NVMe (transcoding temp, thumbnails)
- **Media**: Network storage (large capacity, read-only mount)
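
A compose-level sketch of the tiering; the cache path here is illustrative, and the concrete layout for this host is in `jellyfin-ubuntu-manticore.md`:

```yaml
# Illustrative volume mapping for the three tiers
volumes:
  - ./config:/config                # config: local SSD (database, settings)
  - /mnt/nvme/media-cache:/cache    # cache: local NVMe (transcode temp, thumbnails) - example path
  - /mnt/truenas/media:/media:ro    # media: network share, mounted read-only
```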

### Multi-Service GPU Sharing

**Pattern**: Resource allocation when multiple services share a GPU

- Limit background tasks (Tdarr) to fewer concurrent jobs
- Prioritize real-time services (Jellyfin/Plex playback)
- Consumer GPUs are limited to 2-3 concurrent NVENC sessions (the exact cap depends on driver version); a quick session check is shown below
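
To see how many NVENC sessions are active at any moment (useful when tuning Tdarr's worker count), `nvidia-smi` can report encoder session statistics:

```bash
# Active NVENC sessions and average encode FPS on the shared GPU
nvidia-smi --query-gpu=encoder.stats.sessionCount,encoder.stats.averageFps --format=csv
```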

## Common Configurations

### NVIDIA GPU Setup

```bash
# Verify GPU in container
docker exec <container> nvidia-smi

# Check encoder/decoder utilization
nvidia-smi dmon -s u
```

### Media Volume Mounts

```yaml
volumes:
  - /mnt/truenas/media:/media:ro  # Read-only for safety
```

### Client Discovery

- **Jellyfin**: UDP 7359
- **Plex**: UDP 32410-32414, GDM
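
If a host firewall such as ufw is running on the media server, these discovery ports need to be reachable from the LAN. A minimal sketch, assuming ufw and a 10.10.0.0/24 LAN (adjust the subnet to your network):

```bash
# Jellyfin client discovery
sudo ufw allow from 10.10.0.0/24 to any port 7359 proto udp

# Plex GDM discovery
sudo ufw allow from 10.10.0.0/24 to any port 32410:32414 proto udp
```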

## Integration Points

### Watch History Sync

- **Tool**: watchstate (ghcr.io/arabcoders/watchstate)
- **Method**: API-based sync between services
- **Note**: NFO files do NOT store watch history

### Tdarr Integration

- Tdarr pre-processes media for optimal streaming
- Shared GPU resources require coordination
- See `tdarr/CONTEXT.md` for transcoding system details

## Best Practices

### Performance

1. Use NVMe for cache/transcoding temp directories
2. Mount media read-only to prevent accidental modifications
3. Enable hardware transcoding for all supported codecs
4. Limit concurrent transcodes based on GPU capability

### Reliability

1. Use `restart: unless-stopped` for containers
2. Separate config from cache (different failure modes)
3. Monitor disk space on cache volumes
4. Take regular database backups of the config directory (see the backup sketch below)
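
A minimal backup sketch for the Jellyfin deployment described above; the `~/backups` destination is an assumption, and stopping the container briefly keeps the SQLite database consistent:

```bash
# Archive Jellyfin's config (database + settings) with a dated filename
docker stop jellyfin
mkdir -p ~/backups
tar -czf ~/backups/jellyfin-config-$(date +%F).tar.gz -C ~/docker/jellyfin config
docker start jellyfin
```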

### Security

1. Run containers as non-root (PUID/PGID)
2. Use read-only media mounts
3. Limit network exposure (internal LAN only)
4. Regular container image updates

## GPU Compatibility Notes

### NVIDIA Pascal (GTX 10-series)

- NVENC: H.264, HEVC (no B-frames for HEVC)
- NVDEC: H.264, HEVC, VP8, VP9
- Sessions: 2 concurrent (driver-enforced consumer limit; newer drivers raise this)

### NVIDIA Turing+ (RTX 20-series and newer)

- NVENC: H.264, HEVC (with B-frames); AV1 encode requires Ada Lovelace (RTX 40-series) or newer
- NVDEC: H.264, HEVC, VP8, VP9; AV1 decode requires Ampere (RTX 30-series) or newer
- Sessions: 3+ concurrent

## Troubleshooting

### Common Issues

1. **No GPU in container**: Check Docker/Podman GPU passthrough config
2. **Transcoding failures**: Verify codec support for your GPU generation
3. **Slow playback start**: Check network mount performance
4. **Cache filling up**: Monitor trickplay/thumbnail generation

### Diagnostic Commands

```bash
# GPU status
nvidia-smi

# Container GPU access
docker exec <container> nvidia-smi

# Encoder/decoder utilization
nvidia-smi dmon -s u

# Container logs
docker logs <container> 2>&1 | tail -50
```

media-servers/jellyfin-ubuntu-manticore.md (new file, 154 lines)
# Jellyfin Setup on ubuntu-manticore

## Overview

Jellyfin media server deployed on ubuntu-manticore (10.10.0.226) with NVIDIA GTX 1070 for hardware-accelerated transcoding.

**Date**: 2025-12-04

## Architecture

```
ubuntu-manticore (10.10.0.226)
└── jellyfin (container)
    ├── Web UI: http://10.10.0.226:8096
    ├── Discovery: UDP 7359
    ├── Config: ~/docker/jellyfin/config/
    ├── Cache: /mnt/NV2/jellyfin-cache (NVMe)
    ├── Media: /mnt/truenas/media (read-only)
    └── GPU: GTX 1070 (NVENC/NVDEC)
```

## Docker Compose Configuration

**Location**: `~/docker/jellyfin/docker-compose.yml`

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    ports:
      - "8096:8096"       # Web UI
      - "7359:7359/udp"   # Client discovery
    volumes:
      - ./config:/config
      - /mnt/NV2/jellyfin-cache:/cache
      - /mnt/truenas/media:/media:ro
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
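
To deploy, or to apply changes to the compose file:

```bash
cd ~/docker/jellyfin
docker compose up -d

# Follow startup logs
docker logs -f jellyfin
```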

## Storage Strategy

| Path | Location | Purpose |
|------|----------|---------|
| /config | ~/docker/jellyfin/config/ | Database, settings (small, OS drive) |
| /cache | /mnt/NV2/jellyfin-cache | Transcoding temp, trickplay images (large, NVMe) |
| /media | /mnt/truenas/media | Media library (read-only, network) |

**Why NVMe for cache:**

- Trickplay (timeline thumbnails) can grow to several GB
- Chapter images add up across large libraries
- Transcoding temp files need fast I/O
- Prevents filling up the boot drive (a quick usage check is shown below)
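
Using the paths from the table above, cache growth and remaining space can be checked with:

```bash
# Total size of the Jellyfin cache on the NVMe volume
du -sh /mnt/NV2/jellyfin-cache

# Free space on the cache mount and the config location
df -h /mnt/NV2 ~/docker/jellyfin/config
```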

## GPU Hardware Acceleration

### Available Encoders

| Encoder | Status |
|---------|--------|
| h264_nvenc | Working |
| hevc_nvenc | Working |
| av1_nvenc | Working |

### Available Decoders

| Decoder | Status |
|---------|--------|
| h264_cuvid | Working |
| hevc_cuvid | Working |
| av1_cuvid | Working |
| vp8_cuvid | Working |
| vp9_cuvid | Working |

**Note**: `av1_nvenc` and `av1_cuvid` are exposed by the bundled ffmpeg, but Pascal cards such as the GTX 1070 have no hardware AV1 support, so AV1 content falls back to software processing.
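
To list exactly which NVENC encoders and CUVID decoders the bundled ffmpeg exposes (the `/usr/lib/jellyfin-ffmpeg/ffmpeg` path matches the official image; adjust if your build places it elsewhere):

```bash
# NVENC encoders compiled into Jellyfin's ffmpeg
docker exec jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep nvenc

# CUVID/NVDEC decoders
docker exec jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -decoders | grep cuvid
```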

### Hardware Acceleration Types

- CUDA
- VAAPI
- QSV (not applicable - Intel)
- Vulkan
- OpenCL

### Configuring Hardware Transcoding

1. Go to **Dashboard** → **Playback** → **Transcoding**
2. Select **NVIDIA NVENC** as the hardware acceleration method
3. Enable the desired codecs for hardware decoding/encoding
4. Save changes, then verify GPU usage with the command below
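
To confirm the settings took effect, start a playback session that forces a transcode (for example, lower the client's quality setting) and watch the GPU; Jellyfin's ffmpeg process should appear with non-zero encoder utilization:

```bash
# Refreshes every 2 seconds while a transcode is running
watch -n 2 nvidia-smi
```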

## Ports

| Port | Protocol | Purpose |
|------|----------|---------|
| 8096 | TCP | Web UI and API |
| 7359 | UDP | Client auto-discovery on LAN |

## Initial Setup

1. Access http://10.10.0.226:8096
2. Complete the setup wizard:
   - Create admin account
   - Add media libraries (Movies: /media/Movies, TV: /media/TV)
   - Configure metadata providers
   - Skip NFO savers (not needed unless sharing with Kodi)
3. Configure hardware transcoding in Dashboard

## Resource Sharing with Tdarr

Both Jellyfin and Tdarr share the GTX 1070. To prevent conflicts:

- **Tdarr**: Limited to 1 GPU worker
- **Jellyfin**: Handles real-time transcoding on demand
- **GTX 1070**: Supports 2 concurrent NVENC sessions (consumer card limit)

Jellyfin transcodes are more latency-sensitive (users waiting for playback), so Tdarr yields priority by limiting concurrent jobs.

## Watch History Sync (Future)

For syncing watch history between Plex and Jellyfin (a compose sketch follows this list):

- Use `ghcr.io/arabcoders/watchstate`
- Syncs via API, not NFO files
- NFO files don't store watch state
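
A minimal compose sketch for running watchstate alongside the other services. The internal UI port and the `/config` state path are assumptions; check the watchstate README before relying on them:

```yaml
services:
  watchstate:
    image: ghcr.io/arabcoders/watchstate:latest
    container_name: watchstate
    restart: unless-stopped
    environment:
      - TZ=America/Chicago
    ports:
      - "8081:8080"     # assumed internal UI port; remap as needed
    volumes:
      - ./data:/config  # assumed state/config directory
```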

## Troubleshooting

### GPU Not Detected in Transcoding

```bash
# Verify GPU access in container
docker exec jellyfin nvidia-smi
```

### Check Available Encoders

Container logs show available encoders on startup:

```bash
docker logs jellyfin 2>&1 | grep -i "available encoders"
```

### Transcoding Failures

Check Jellyfin logs in Dashboard → Logs or:

```bash
docker logs jellyfin 2>&1 | tail -50
```

## Related Documentation

- Server inventory: `networking/server-inventory.md`
- Tdarr setup: `tdarr/ubuntu-manticore-setup.md`