Add Pi-hole HA documentation and networking updates
Add dual Pi-hole high availability setup guide, deployment notes, and disk optimization docs. Update NPM + Pi-hole sync script and docs. Add UniFi DNS firewall troubleshooting and a networking scripts CONTEXT doc.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
parent a35891b565
commit 6c8d199359
111 networking/pihole-disk-optimization.md (new file)
@@ -0,0 +1,111 @@
# Pi-hole Disk Optimization - 2026-02-06

## Problem

Primary Pi-hole (npm-pihole at 10.10.0.16) had critical disk space issues:

- **Root filesystem**: 91% full (27GB used of 31GB)
- **Pi-hole data**: 3.6GB total
  - `pihole-FTL.db`: 2.8GB (query log database)
  - `gravity.db`: 460MB (blocklist database)
  - `gravity.db.v5.backup`: 114MB (old v5 backup)
- **Query retention**: 91 days (default)
- **Total queries stored**: 26.4 million

## Actions Taken

### 1. Removed Old Backup Database

```bash
rm /home/cal/container-data/pihole/etc-pihole/gravity.db.v5.backup
```

**Space freed**: 114MB

### 2. Cleaned Up Docker Resources

```bash
docker system prune -af --volumes
```

**Space freed**: 4.3GB

- Removed unused images (old Pi-hole versions, Jellyfin, Foundry, etc.)
- Removed stopped containers
- Removed unused networks

### 3. Reduced Query Log Retention

```bash
echo "database.maxDBdays=7.0" >> /etc/pihole/pihole-FTL.conf
docker compose restart
```

**Configuration**: Changed from 91 days to 7 days

**Future savings**: The database will automatically maintain ~7 days of logs instead of 91
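The expected post-change database size can be sanity-checked with a quick back-of-envelope calculation, assuming query volume stays roughly constant day to day:

```bash
# Scale the current 2.8GB / 91-day database down to the 7-day window
awk 'BEGIN {
  db_mb = 2800; old_days = 91; new_days = 7
  printf "expected FTL db size: ~%.0f MB\n", db_mb * new_days / old_days
}'
# prints: expected FTL db size: ~215 MB
```

The actual steady-state size will be somewhat larger since recent days are always fully retained while the oldest day is pruned incrementally.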
## Results

| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Disk Usage | 91% (27GB/31GB) | 73% (22GB/31GB) | -18 pts |
| Free Space | 2.8GB | 8.2GB | +5.4GB |
| Total Freed | - | 4.3GB | - |

## Ongoing Maintenance

### Automatic Cleanup

Pi-hole will now automatically:
- Delete queries older than 7 days
- Maintain the FTL database at roughly 300-500MB (instead of 2.8GB)
- Keep the most recent week of query logs for troubleshooting

### Manual Cleanup (if needed)

```bash
# Flush Pi-hole logs
docker exec pihole pihole -f

# Check disk usage
df -h /
du -sh /home/cal/container-data/pihole/etc-pihole/*

# Clean unused Docker resources
docker system prune -af --volumes

# Check query count
docker exec pihole pihole-FTL sqlite3 /etc/pihole/pihole-FTL.db 'SELECT COUNT(*) FROM queries;'
```

### Monitoring Recommendations

**Set up disk space alerts** when usage exceeds 85%:

```bash
# Add to cron (daily check)
0 8 * * * df -h / | awk '$5+0 > 85 {print "Warning: Disk usage at " $5 " on npm-pihole"}' | mail -s "Disk Alert" admin@example.com
```
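The awk threshold in that cron line can be verified offline by feeding it a sample `df` row:

```bash
# Sample df output line with 91% usage; $5 is the Use% column,
# and $5+0 coerces "91%" to the number 91 for comparison
printf '/dev/sda1 31G 27G 2.8G 91%% /\n' \
  | awk '$5+0 > 85 {print "Warning: Disk usage at " $5 " on npm-pihole"}'
# prints: Warning: Disk usage at 91% on npm-pihole
```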
**Check Pi-hole database size monthly**:

```bash
du -h /home/cal/container-data/pihole/etc-pihole/pihole-FTL.db
```

## Configuration File

`/etc/pihole/pihole-FTL.conf`:

```conf
# Keep queries for 7 days only
database.maxDBdays=7.0
```

## Prevention Tips

1. **Regular Docker cleanup**: Run `docker system prune` monthly
2. **Monitor disk usage**: Check `df -h` weekly
3. **Review query retention**: 7 days is sufficient for most troubleshooting
4. **Consider disabling query logging** if not needed:
   ```bash
   pihole logging off
   ```
5. **Archive old logs** before major upgrades (like v5→v6)

## Space Budget Estimates

With 7-day retention:
- **Pi-hole FTL database**: ~300-500MB (vs 2.8GB before)
- **Gravity database**: ~460MB (36 blocklists)
- **Docker images**: ~2-3GB (active containers only)
- **System overhead**: ~20GB
- **Recommended free space**: 5GB+ for headroom

**Current allocation is healthy** at 73% with 8.2GB free.
156 networking/pihole-ha-deployment-notes.md (new file)
@@ -0,0 +1,156 @@
# Pi-hole HA Deployment Notes - 2026-02-06

## Deployment Summary

Successfully deployed a dual Pi-hole high availability setup with the following configuration:

### Infrastructure

**Primary Pi-hole (npm-pihole)**
- Host: 10.10.0.16 (LXC container)
- Version: Pi-hole v6 (upgraded from v5.18.3)
- Web UI: http://10.10.0.16:81/admin
- Web Password: newpihole456
- App Password: Stored in `~/.claude/secrets/pihole1_app_password`
- DNS Port: 53
- Blocklists: 36 lists (restored from v5 backup)

**Secondary Pi-hole (ubuntu-manticore)**
- Host: 10.10.0.226 (physical server)
- Version: Pi-hole v6.4
- Web UI: http://10.10.0.226:8053/admin
- Web Password: pihole123
- App Password: Stored in `~/.claude/secrets/pihole2_app_password`
- DNS Port: 53
- Note: systemd-resolved stub listener disabled

### What's Working

✅ **DNS Resolution**
- Both Pi-holes responding to DNS queries
- Ad blocking functional on both instances
- NPM custom DNS sync working (18 domains synced to primary)

✅ **Network Configuration**
- Primary Pi-hole accessible network-wide
- Secondary Pi-hole accessible network-wide
- systemd-resolved conflicts resolved

✅ **NPM Integration**
- npm-pihole-sync.sh script enhanced for dual Pi-hole support
- Script location: `/home/cal/scripts/npm-pihole-sync.sh` on npm-pihole
- Hourly cron configured
- Syncs 18 proxy host domains to the primary Pi-hole

### Known Issues

⚠️ **Orbital Sync Authentication Failing**
- Orbital Sync v1.8.4 unable to authenticate with Pi-hole v6
- App passwords generated but login fails
- Location: `~/docker/orbital-sync/` on ubuntu-manticore
- Status: Needs further investigation or an alternative sync solution

⚠️ **Secondary Pi-hole NPM Domains**
- Custom DNS entries not yet synced to the secondary
- git.manticorum.com resolves to Cloudflare IPs on the secondary
- The primary resolves it correctly to 10.10.0.16
- Impact: Minimal for HA DNS, but local overrides exist only on the primary

⚠️ **Blocklists Not Synced**
- Primary has 36 blocklists restored from the v5 backup
- Secondary still has the default list only
- Orbital Sync would handle this once authentication is fixed

## v5 → v6 Upgrade Notes

### Database Migration Issue

When upgrading Pi-hole from v5 to v6, the gravity database schema changed:
- v5 database: 114MB with 36 adlists
- v6 fresh database: 108KB with 1 default list

**Resolution:**
1. Backup created automatically: `gravity.db.v5.backup`
2. Adlists extracted from the backup using Python's sqlite3 module
3. All 36 adlist URLs restored via the web UI (comma-separated paste)
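Step 2 can also be done with the `sqlite3` CLI instead of Python. This sketch assumes the tool is installed and relies on the v5 gravity schema's `adlist` table with an `address` column:

```bash
# Dump all adlist URLs from the v5 backup and join them with commas,
# ready to paste into the v6 web UI
sqlite3 gravity.db.v5.backup "SELECT address FROM adlist;" | paste -sd, -
```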
**Lesson Learned**: Always export adlists before major version upgrades

### Authentication Changes

Pi-hole v6 uses app passwords instead of API tokens:
- Generated via: Settings → Web Interface / API → Configure app password
- Different from the web login password
- Required for API access and tools like Orbital Sync

## Next Steps

### Immediate
1. ✅ Document app password locations (completed)
2. ✅ Update .env.example files (completed)
3. ✅ Update deployment documentation (completed)

### Short Term
1. **Restore blocklists to secondary** - Manually add the 36 adlists via the web UI
2. **Manually sync NPM domains to secondary** - Update custom.list on the secondary
3. **Update UniFi DHCP** - Configure DNS1=10.10.0.16, DNS2=10.10.0.226
4. **Test failover** - Verify DNS works when the primary is down
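Short-term step 2 could be scripted as an interim measure until Orbital Sync works. This is only a sketch: the source path, the `ubuntu-manticore` SSH alias, the container name, and the secondary's config path are all assumptions from this deployment:

```bash
# Push the primary's custom DNS entries to the secondary and reload its
# resolver. Defined as a function so it can be sourced and reused.
push_custom_list() {
  local src=${1:-/home/cal/container-data/pihole/etc-pihole/custom.list}
  scp "$src" ubuntu-manticore:~/docker/pihole/config/custom.list \
    && ssh ubuntu-manticore "docker exec pihole pihole reloaddns"
}
```

Run it as `push_custom_list`, or drop it into the hourly cron next to npm-pihole-sync.sh.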
### Long Term
1. **Investigate Orbital Sync v6 compatibility** - Check for updates or alternatives
2. **Consider a manual sync script** - Interim solution until Orbital Sync works
3. **Monitor Pi-hole v6 releases** - Watch for stability updates

## File Locations

### Secrets
```
~/.claude/secrets/pihole1_app_password   # Primary app password
~/.claude/secrets/pihole2_app_password   # Secondary app password
```

### Server Configs
```
server-configs/ubuntu-manticore/docker-compose/pihole/
server-configs/ubuntu-manticore/docker-compose/orbital-sync/
server-configs/networking/scripts/npm-pihole-sync.sh
```

### Runtime Locations
```
npm-pihole:
  /home/cal/container-data/pihole/                                  # Primary Pi-hole data
  /home/cal/scripts/npm-pihole-sync.sh                              # NPM sync script
  /home/cal/container-data/pihole/etc-pihole/gravity.db.v5.backup   # v5 backup

ubuntu-manticore:
  ~/docker/pihole/          # Secondary Pi-hole
  ~/docker/orbital-sync/    # Sync service (not working yet)
```

## Blocklist URLs (36 total)

Comma-separated for web UI import:
```
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts,https://blocklistproject.github.io/Lists/ads.txt,https://blocklistproject.github.io/Lists/abuse.txt,https://raw.githubusercontent.com/matomo-org/referrer-spam-blacklist/master/spammers.txt,https://someonewhocares.org/hosts/zero/hosts,https://raw.githubusercontent.com/VeleSila/yhosts/master/hosts,https://winhelp2002.mvps.org/hosts.txt,https://v.firebog.net/hosts/neohostsbasic.txt,https://raw.githubusercontent.com/RooneyMcNibNug/pihole-stuff/master/SNAFU.txt,https://paulgb.github.io/BarbBlock/blacklists/hosts-file.txt,https://v.firebog.net/hosts/static/w3kbl.txt,https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Spam/hosts,https://raw.githubusercontent.com/PolishFiltersTeam/KADhosts/master/KADhosts.txt,https://v.firebog.net/hosts/Easyprivacy.txt,https://v.firebog.net/hosts/Prigent-Ads.txt,https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.2o7Net/hosts,https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/hosts/spy.txt,https://hostfiles.frogeye.fr/firstparty-trackers-hosts.txt,https://hostfiles.frogeye.fr/multiparty-trackers-hosts.txt,https://www.github.developerdan.com/hosts/lists/ads-and-tracking-extended.txt,https://raw.githubusercontent.com/Perflyst/PiHoleBlocklist/master/android-tracking.txt,https://raw.githubusercontent.com/Perflyst/PiHoleBlocklist/master/SmartTV.txt,https://raw.githubusercontent.com/Perflyst/PiHoleBlocklist/master/AmazonFireTV.txt,https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-blocklist.txt,https://raw.githubusercontent.com/DandelionSprout/adfilt/master/Alternate%20versions%20Anti-Malware%20List/AntiMalwareHosts.txt,https://osint.digitalside.it/Threat-Intel/lists/latestdomains.txt,https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt,https://v.firebog.net/hosts/Prigent-Crypto.txt,https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt,https://phishing.army/download/phishing_army_blocklist_extended.txt,https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt,https://raw.githubusercontent.com/Spam404/lists/master/main-blacklist.txt,https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Risk/hosts,https://urlhaus.abuse.ch/downloads/hostfile/,https://v.firebog.net/hosts/Prigent-Malware.txt,https://v.firebog.net/hosts/Shalla-mal.txt
```

## Testing Commands

```bash
# Test DNS on both Pi-holes
dig @10.10.0.16 google.com +short
dig @10.10.0.226 google.com +short

# Test ad blocking
dig @10.10.0.16 doubleclick.net +short    # Should return 0.0.0.0
dig @10.10.0.226 doubleclick.net +short   # Should return 0.0.0.0

# Test NPM custom DNS (primary only currently)
dig @10.10.0.16 git.manticorum.com +short    # Should return 10.10.0.16
dig @10.10.0.226 git.manticorum.com +short   # Currently returns Cloudflare IPs

# Check Pi-hole status
ssh cal@10.10.0.16 "docker exec pihole pihole status"
ssh ubuntu-manticore "docker exec pihole pihole status"
```
423 networking/pihole-ha-setup.md (new file)
@@ -0,0 +1,423 @@
# Pi-hole High Availability Setup

## Architecture Overview

This homelab uses a dual Pi-hole setup for DNS high availability and ad blocking across the network.

```
┌─────────────────────────────────────────────────────────────┐
│                     UniFi DHCP Server                       │
│           DNS1: 10.10.0.16      DNS2: 10.10.0.226           │
└────────────┬────────────────────────────┬───────────────────┘
             │                            │
             ▼                            ▼
    ┌────────────────┐          ┌────────────────┐
    │  npm-pihole    │          │  ubuntu-       │
    │  10.10.0.16    │◄────────►│  manticore     │
    │                │ Orbital  │  10.10.0.226   │
    │  - NPM         │  Sync    │                │
    │  - Pi-hole 1   │          │  - Jellyfin    │
    │    (Primary)   │          │  - Tdarr       │
    └────────────────┘          │  - Pi-hole 2   │
            ▲                   │    (Secondary) │
            │                   └────────────────┘
            │
    ┌────────────────┐
    │  NPM DNS Sync  │
    │  (hourly cron) │
    │                │
    │  Syncs proxy   │
    │  hosts to both │
    │  Pi-holes      │
    └────────────────┘
```

## Components

### Primary Pi-hole (npm-pihole)
- **Host**: npm-pihole LXC (10.10.0.16)
- **Web UI**: http://10.10.0.16:81/admin
- **Role**: Primary DNS server, receives NPM proxy host sync
- **Upstream DNS**: Google DNS (8.8.8.8, 8.8.4.4)

### Secondary Pi-hole (ubuntu-manticore)
- **Host**: ubuntu-manticore physical server (10.10.0.226)
- **Web UI**: http://10.10.0.226:8053/admin
- **Role**: Secondary DNS server, failover
- **Upstream DNS**: Google DNS (8.8.8.8, 8.8.4.4)
- **Port**: 8053 (web UI) to avoid conflict with Jellyfin on 8096

### Orbital Sync
- **Host**: ubuntu-manticore (co-located with the secondary Pi-hole)
- **Function**: Synchronizes blocklists, whitelists, and custom DNS entries from primary to secondary
- **Sync Interval**: 5 minutes
- **Method**: Pi-hole Teleporter API (official backup/restore)

### NPM DNS Sync
- **Host**: npm-pihole (cron job)
- **Function**: Syncs NPM proxy host entries to both Pi-holes' custom.list
- **Schedule**: Hourly
- **Script**: `server-configs/networking/scripts/npm-pihole-sync.sh`

## Failover Behavior

### How Client DNS Failover Works

1. **Normal operation**: Clients query DNS1 (10.10.0.16 - primary)
2. **Primary failure**: If the primary doesn't respond, the client automatically queries DNS2 (10.10.0.226 - secondary)
3. **Primary recovery**: Client preference returns to DNS1 when it's available again

### Failover Timing
- **Detection**: 2-5 seconds (client OS dependent)
- **Fallback**: Immediate query to the secondary DNS
- **Impact**: Users typically see no interruption

### Load Distribution
- Most clients prefer DNS1 (primary) by default
- Some clients may round-robin between DNS1 and DNS2
- Both servers handle queries to distribute load
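The fallback behavior can be exercised from any LAN host with a small probe. A minimal sketch, assuming `dig` is installed and the IPs above; it only mimics the OS resolver's failover, it does not replace it:

```bash
# Query the primary with a short timeout; on no reply, retry the secondary.
# dig exits non-zero when it gets no answer, which drives the fallback.
resolve_with_failover() {
  local name=$1
  dig @10.10.0.16 +time=2 +tries=1 +short "$name" \
    || dig @10.10.0.226 +time=2 +tries=1 +short "$name"
}
```

Usage: `resolve_with_failover google.com` while stopping one Pi-hole at a time.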
## Benefits Over Previous Setup

### Before (Single Pi-hole + Cloudflare Fallback)
- ❌ Single point of failure (Pi-hole down = DNS down)
- ❌ iOS devices preferred public DNS (1.1.1.1), bypassing local DNS overrides
- ❌ 403 errors accessing internal services (git.manticorum.com) due to NPM ACL restrictions
- ❌ No ad blocking when the fallback DNS was used

### After (Dual Pi-hole HA)
- ✅ True high availability across separate physical hosts
- ✅ DNS survives a single host failure
- ✅ All devices use Pi-hole for consistent ad blocking
- ✅ Local DNS overrides work on all devices (iOS, Android, desktop)
- ✅ No 403 errors on internal services
- ✅ Automatic synchronization of blocklists and custom DNS entries

## Deployment Locations

### Configuration Files
```
server-configs/
├── ubuntu-manticore/
│   └── docker-compose/
│       ├── pihole/
│       │   ├── docker-compose.yml
│       │   ├── .env.example
│       │   ├── config/              # Pi-hole persistent data
│       │   └── dnsmasq/             # dnsmasq configuration
│       └── orbital-sync/
│           ├── docker-compose.yml
│           └── .env.example
└── networking/
    └── scripts/
        └── npm-pihole-sync.sh       # Enhanced for dual Pi-hole support
```

### Runtime Locations
```
ubuntu-manticore:
  ~/docker/pihole/          # Secondary Pi-hole
  ~/docker/orbital-sync/    # Synchronization service

npm-pihole:
  /path/to/pihole/          # Primary Pi-hole (existing)
  /path/to/npm-sync-cron/   # NPM → Pi-hole sync script
```

## Initial Setup Steps

### 1. Deploy Secondary Pi-hole (ubuntu-manticore)

```bash
# SSH to ubuntu-manticore
ssh ubuntu-manticore

# Create directory structure
mkdir -p ~/docker/pihole ~/docker/orbital-sync

# Copy docker-compose.yml from the repository
# (assumes server-configs is synced to the host)

# Create .env file with a strong password
cd ~/docker/pihole
echo "WEBPASSWORD=$(openssl rand -base64 32)" > .env
echo "TZ=America/Chicago" >> .env

# Start Pi-hole
docker compose up -d

# Monitor startup
docker logs pihole -f
```

**Note on Pi-hole v6 Upgrades**: If upgrading from v5 to v6, blocklists are not automatically migrated. The v5 database is backed up as `gravity.db.v5.backup`. To restore blocklists, access the web UI and manually add them via Settings → Adlists (multiple lists can be added by comma-separating URLs).

### 2. Configure Secondary Pi-hole

```bash
# Access web UI: http://10.10.0.226:8053/admin
# Login with the password from the .env file

# Settings → DNS:
# - Upstream DNS: Google DNS (8.8.8.8, 8.8.4.4)
# - Enable DNSSEC
# - Interface listening behavior: Listen on all interfaces

# Settings → Privacy:
# - Query logging: Enabled
# - Privacy level: Show everything (for troubleshooting)

# Test DNS resolution
dig @10.10.0.226 google.com
dig @10.10.0.226 doubleclick.net    # Should be blocked
```

### 3. Generate App Passwords (Pi-hole v6)

**Important**: Pi-hole v6 uses app passwords instead of API tokens for authentication.

```bash
# Primary Pi-hole (10.10.0.16:81)
# 1. Login to http://10.10.0.16:81/admin
# 2. Navigate to: Settings → Web Interface / API → Advanced Settings
# 3. Click "Configure app password"
# 4. Copy the generated app password
# 5. Store in: ~/.claude/secrets/pihole1_app_password

# Secondary Pi-hole (10.10.0.226:8053)
# 1. Login to http://10.10.0.226:8053/admin
# 2. Navigate to: Settings → Web Interface / API → Advanced Settings
# 3. Click "Configure app password"
# 4. Copy the generated app password
# 5. Store in: ~/.claude/secrets/pihole2_app_password
```
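Step 5 of each block can be done from a shell with owner-only permissions on the secret. `$APP_PASSWORD` is a hypothetical variable holding the copied password; the paths match the secrets locations above:

```bash
# Store the app password so only the owner can read it
mkdir -p ~/.claude/secrets
chmod 700 ~/.claude/secrets
printf '%s\n' "$APP_PASSWORD" > ~/.claude/secrets/pihole1_app_password
chmod 600 ~/.claude/secrets/pihole1_app_password
```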
### 4. Deploy Orbital Sync

```bash
# SSH to ubuntu-manticore
cd ~/docker/orbital-sync

# Create .env file with the app passwords from step 3
cat > .env << EOF
PRIMARY_HOST_PASSWORD=$(cat ~/.claude/secrets/pihole1_app_password)
SECONDARY_HOST_PASSWORD=$(cat ~/.claude/secrets/pihole2_app_password)
EOF

# Start Orbital Sync
docker compose up -d

# Monitor the initial sync
docker logs orbital-sync -f

# Expected output on success:
# "✓ Signed in to http://10.10.0.16:81/admin"
# "✓ Signed in to http://127.0.0.1:8053/admin"
# "✓ Sync completed successfully"
```

### 5. Update NPM DNS Sync Script

The script at `server-configs/networking/scripts/npm-pihole-sync.sh` has been enhanced to sync to both Pi-holes:

```bash
# Test the updated script
ssh npm-pihole "/path/to/npm-pihole-sync.sh --dry-run"

# Verify both Pi-holes receive entries
ssh npm-pihole "docker exec pihole cat /etc/pihole/custom.list | grep git.manticorum.com"
ssh ubuntu-manticore "docker exec pihole cat /etc/pihole/custom.list | grep git.manticorum.com"
```

### 6. Update UniFi DHCP Configuration

```
1. Access the UniFi Network Controller
2. Navigate to: Settings → Networks → LAN → DHCP
3. Set DHCP DNS Server: Manual
4. DNS Server 1: 10.10.0.16 (primary Pi-hole)
5. DNS Server 2: 10.10.0.226 (secondary Pi-hole)
6. Remove any public DNS servers (1.1.1.1, etc.)
7. Save and apply
```

**Note**: Clients will pick up the new DNS servers on their next DHCP lease renewal (typically within 24 hours), or renew manually:
- Windows: `ipconfig /release && ipconfig /renew`
- macOS/Linux: `sudo dhclient -r && sudo dhclient` or reconnect to WiFi
- iOS/Android: Forget the network and reconnect

## Testing Procedures

### DNS Resolution Tests

```bash
# Test both Pi-holes respond
dig @10.10.0.16 google.com
dig @10.10.0.226 google.com

# Test ad blocking works on both
dig @10.10.0.16 doubleclick.net
dig @10.10.0.226 doubleclick.net

# Test custom DNS entries (from NPM sync)
dig @10.10.0.16 git.manticorum.com
dig @10.10.0.226 git.manticorum.com
```

### Failover Tests

```bash
# Test 1: Primary Pi-hole failure
ssh npm-pihole "docker stop pihole"
dig google.com    # Should still resolve via secondary
ssh npm-pihole "docker start pihole"

# Test 2: Secondary Pi-hole failure
ssh ubuntu-manticore "docker stop pihole"
dig google.com    # Should still resolve via primary
ssh ubuntu-manticore "docker start pihole"

# Test 3: iOS device access to internal services
# From an iPhone, access: https://git.manticorum.com
# Expected: 200 OK (no 403 errors)
# NPM logs should show a local IP (10.10.0.x), not a public IP
```
### Orbital Sync Validation

```bash
# Add a test blocklist to the primary Pi-hole
# Web UI → Adlists → Add: https://example.com/blocklist.txt

# Wait 5 minutes for sync

# Check the secondary Pi-hole
# Web UI → Adlists → should show the same blocklist

# Check sync logs
ssh ubuntu-manticore "docker logs orbital-sync --tail 50"
```

### NPM DNS Sync Validation

```bash
# Add a new NPM proxy host (e.g., test.manticorum.com)

# Wait for the hourly cron sync

# Verify both Pi-holes have the entry
ssh npm-pihole "docker exec pihole cat /etc/pihole/custom.list | grep test.manticorum.com"
ssh ubuntu-manticore "docker exec pihole cat /etc/pihole/custom.list | grep test.manticorum.com"

# Test DNS resolution
dig test.manticorum.com
```

## Monitoring

### Health Checks

```bash
# Check the Pi-hole containers are running
ssh npm-pihole "docker ps | grep pihole"
ssh ubuntu-manticore "docker ps | grep pihole"

# Check Orbital Sync is running
ssh ubuntu-manticore "docker ps | grep orbital-sync"

# Check DNS response times
time dig @10.10.0.16 google.com
time dig @10.10.0.226 google.com
```
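These spot checks can be folded into a cron-able probe. A minimal sketch, assuming `dig` is installed and the server IPs above:

```bash
# Print which server failed and return non-zero if either Pi-hole
# stops answering DNS; suitable for a cron job that mails on failure.
check_pihole_dns() {
  local failed=0 server
  for server in 10.10.0.16 10.10.0.226; do
    if ! dig @"$server" +time=2 +tries=1 +short google.com >/dev/null; then
      echo "DNS check failed for $server"
      failed=1
    fi
  done
  return "$failed"
}
```

Pairing this with `|| mail -s "Pi-hole DNS Alert" admin@example.com` mirrors the disk-alert cron pattern used elsewhere in these docs.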
### Resource Usage

```bash
# Pi-hole typically uses <1% CPU and ~150MB RAM
ssh ubuntu-manticore "docker stats pihole --no-stream"

# Verify no impact on Jellyfin/Tdarr
ssh ubuntu-manticore "docker stats jellyfin tdarr --no-stream"
```

### Query Logs

- **Primary**: http://10.10.0.16:81/admin → Query Log
- **Secondary**: http://10.10.0.226:8053/admin → Query Log
- Look for a balanced query distribution across both servers

## Troubleshooting

See `networking/troubleshooting.md` for detailed Pi-hole HA troubleshooting scenarios.

### Common Issues

**Issue**: Secondary Pi-hole not receiving queries
- Check UniFi DHCP settings (DNS2 should be 10.10.0.226)
- Force a DHCP lease renewal on a test client
- Verify Pi-hole is listening on port 53: `netstat -tulpn | grep :53`

**Issue**: Orbital Sync not syncing
- Check container logs: `docker logs orbital-sync`
- Verify the app passwords are correct in `.env` (Pi-hole v6 uses app passwords, not the v5 API tokens)
- Test authentication manually against the v6 API, e.g.: `curl -X POST http://10.10.0.16:81/api/auth -H "Content-Type: application/json" -d '{"password":"<app_password>"}'`

**Issue**: NPM domains not appearing in the secondary Pi-hole
- Check the npm-pihole-sync.sh script logs
- Verify SSH access from npm-pihole to ubuntu-manticore
- Manually trigger a sync: `ssh npm-pihole "/path/to/npm-pihole-sync.sh"`

**Issue**: iOS devices still getting 403 on internal services
- Verify UniFi DHCP no longer hands out public DNS (1.1.1.1)
- Force the iOS device to renew DHCP (forget the network and reconnect)
- Check iOS DNS settings: Settings → WiFi → (i) → DNS (should show 10.10.0.16)
- Test DNS resolution from iOS: use a DNS test app or `nslookup git.manticorum.com`

## Maintenance

### Updating Pi-hole

```bash
# Primary Pi-hole
ssh npm-pihole "docker compose pull && docker compose up -d"

# Secondary Pi-hole
ssh ubuntu-manticore "cd ~/docker/pihole && docker compose pull && docker compose up -d"

# Orbital Sync
ssh ubuntu-manticore "cd ~/docker/orbital-sync && docker compose pull && docker compose up -d"
```

### Backup and Recovery

```bash
# Pi-hole Teleporter backups (automatic via Orbital Sync)
# Manual backup from the web UI: Settings → Teleporter → Backup

# Docker volume backup (-C ~ stores home-relative paths in the archive)
ssh ubuntu-manticore "tar -czf ~/pihole-backup-$(date +%Y%m%d).tar.gz -C ~ docker/pihole/config"

# Restore
ssh ubuntu-manticore "tar -xzf ~/pihole-backup-YYYYMMDD.tar.gz -C ~"
```

## Performance Characteristics

### Expected Behavior
- **Query response time**: <50ms on the LAN
- **CPU usage**: <1% per Pi-hole instance
- **RAM usage**: ~150MB per Pi-hole instance
- **Sync latency**: 5 minutes (Orbital Sync interval)
- **NPM sync latency**: Up to 1 hour (cron schedule)

### Capacity
- Both Pi-holes can easily handle 1000+ queries/minute
- No impact on ubuntu-manticore's Jellyfin/Tdarr GPU operations
- Orbital Sync overhead is negligible (<10MB RAM, <0.1% CPU)

## Related Documentation

- **NPM + Pi-hole Integration**: `server-configs/networking/nginx-proxy-manager-pihole.md`
- **Network Troubleshooting**: `networking/troubleshooting.md`
- **ubuntu-manticore Setup**: `media-servers/jellyfin-ubuntu-manticore.md`
- **Orbital Sync Documentation**: https://github.com/mattwebbio/orbital-sync
241 networking/scripts/CONTEXT.md (new file)
@@ -0,0 +1,241 @@
# Networking Scripts - Operational Context

## Script Overview

This directory contains active operational scripts for SSH key management, network security maintenance, and connectivity automation.

## Core Scripts

### SSH Key Maintenance

**Script**: `ssh_key_maintenance.sh`

**Purpose**: Comprehensive SSH key backup, age analysis, and rotation management

**Key Functions**:
- **Automated Backup**: Timestamped backups of SSH keys, config, and known_hosts
- **Age Analysis**: Identifies keys older than 180/365 days and recommends rotation
- **Validity Testing**: Verifies key integrity with ssh-keygen
- **Backup Cleanup**: Maintains only the last 10 backups
- **Maintenance Reports**: Generates comprehensive status reports with connection tests

**Usage**:
```bash
# Run a manual maintenance check
./ssh_key_maintenance.sh

# Recommended: schedule monthly via cron
0 2 1 * * /mnt/NV2/Development/claude-home/networking/scripts/ssh_key_maintenance.sh
```

**Output Locations**:
- **Backup Directory**: `/mnt/NV2/ssh-keys/maintenance-YYYYMMDD-HHMMSS/`
- **Maintenance Report**: `$BACKUP_DIR/MAINTENANCE_REPORT.md`

**Key Age Thresholds**:
- ✅ **< 180 days**: OK
- ⚡ **180-365 days**: Consider rotation
- ⚠️ **> 365 days**: Rotation recommended
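The age classification presumably reduces to logic like this sketch (a hypothetical helper, not the script's actual code; GNU `stat` assumed), which buckets a key file by days since its last modification:

```bash
# Classify an SSH key file against the 180/365-day rotation thresholds
key_age_status() {
  local file=$1 now mtime days
  now=$(date +%s)
  mtime=$(stat -c %Y "$file")
  days=$(( (now - mtime) / 86400 ))
  if [ "$days" -lt 180 ]; then
    echo "OK ($days days)"
  elif [ "$days" -le 365 ]; then
    echo "Consider rotation ($days days)"
  else
    echo "Rotation recommended ($days days)"
  fi
}
```

Usage: `key_age_status ~/.ssh/id_rsa`.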
|
||||
**Backup Retention**: Last 10 backups kept, older ones automatically removed
|
||||
|
||||
## Operational Patterns
|
||||
|
||||
### Automated Maintenance Schedule
|
||||
|
||||
**Recommended Frequency**: Monthly (1st of month at 2 AM)
|
||||
|
||||
**Cron Configuration**:
|
||||
```bash
|
||||
# SSH key maintenance - Monthly backup and analysis
|
||||
0 2 1 * * /mnt/NV2/Development/claude-home/networking/scripts/ssh_key_maintenance.sh >> /var/log/ssh-key-maintenance.log 2>&1
|
||||
```
|
||||
|
||||
**Manual Execution**:
|
||||
- Before major infrastructure changes
|
||||
- After key rotation
|
||||
- When troubleshooting SSH connectivity issues
|
||||
- Quarterly security audits
|
||||
|
||||
### Key Rotation Workflow
|
||||
|
||||
**When Rotation Recommended**:
|
||||
1. Run maintenance script to create pre-rotation backup
|
||||
2. Generate new SSH key pair:
|
||||
```bash
|
||||
ssh-keygen -t rsa -b 4096 -f ~/.ssh/new_key_rsa -C "description"
|
||||
```
|
||||
3. Deploy new public key to servers
|
||||
4. Test connectivity with new key
|
||||
5. Update SSH config to use new key
|
||||
6. Archive old key (don't delete immediately)
|
||||
7. Run maintenance script again to verify
|
||||
8. Remove old public keys from servers after grace period (30 days)
|
||||
|
||||
### Backup Strategy
|
||||
|
||||
**Backup Contents**:
|
||||
- All `*_rsa` and `*_rsa.pub` files from ~/.ssh/
|
||||
- SSH config file (`~/.ssh/config`)
|
||||
- Known hosts file (`~/.ssh/known_hosts`)
|
||||
|
||||
**Storage Location**: `/mnt/NV2/ssh-keys/`
|
||||
- Persistent NAS storage
|
||||
- Protected with 700 permissions
|
||||
- Timestamped directories for point-in-time recovery
|
||||
|
||||
**Retention Policy**:
|
||||
- Keep last 10 backups (approximately 10 months with monthly schedule)
|
||||
- Older backups automatically pruned
|
||||
- Manual backups preserved if named differently
## Configuration Dependencies

### Required Directories
- **NAS Mount**: `/mnt/NV2` must be mounted and accessible
- **Backup Root**: `/mnt/NV2/ssh-keys/` (created automatically)
- **SSH Directory**: `~/.ssh/` with appropriate permissions (700)

### SSH Key Types Supported
- **RSA keys**: `*_rsa` and `*_rsa.pub`
- **Config files**: `~/.ssh/config`
- **Known hosts**: `~/.ssh/known_hosts`

**Note**: The script currently backs up RSA keys only; supporting Ed25519 or other key types requires modifying the script.
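A hedged sketch of that modification, widening the copy glob to pick up Ed25519 material as well. `BACKUP_DIR` is hypothetical here, standing in for the script's timestamped backup directory:

```bash
# Hypothetical extension: back up Ed25519 keys alongside RSA ones.
# BACKUP_DIR stands in for the script's timestamped backup directory.
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)}"
for f in ~/.ssh/*_rsa ~/.ssh/*_rsa.pub ~/.ssh/*_ed25519 ~/.ssh/*_ed25519.pub; do
  [ -e "$f" ] && cp -p "$f" "$BACKUP_DIR/"
done
echo "Backed up $(ls -1 "$BACKUP_DIR" | wc -l) key files to $BACKUP_DIR"
```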
### Server Connection Tests

**Maintenance Report Includes**:
- Primary server connectivity tests (database-apis, pihole, akamai)
- Emergency key connectivity tests (homelab, cloud)
- Suggested commands for manual verification

**Test Targets** (from maintenance report):
```bash
# Primary Keys
ssh -o ConnectTimeout=5 database-apis 'echo "DB APIs: OK"'
ssh -o ConnectTimeout=5 pihole 'echo "PiHole: OK"'
ssh -o ConnectTimeout=5 akamai 'echo "Akamai: OK"'

# Emergency Keys
ssh -i ~/.ssh/emergency_homelab_rsa -o ConnectTimeout=5 cal@10.10.0.16 'echo "Emergency Home: OK"'
ssh -i ~/.ssh/emergency_cloud_rsa -o ConnectTimeout=5 root@172.237.147.99 'echo "Emergency Cloud: OK"'
```

## Troubleshooting Context

### Common Issues

**1. NAS Not Mounted**
- **Symptom**: Script exits with "NAS not mounted at /mnt/NV2"
- **Solution**: Verify NAS mount before running script
```bash
mount | grep /mnt/NV2
ls -la /mnt/NV2/
```

**2. Permission Denied on Backup**
- **Symptom**: Cannot create backup directory
- **Solution**: Check permissions on `/mnt/NV2/ssh-keys/`
```bash
mkdir -p /mnt/NV2/ssh-keys/
chmod 700 /mnt/NV2/ssh-keys/
```

**3. Key Corruption Detected**
- **Symptom**: Script reports "CORRUPTED or unreadable"
- **Solution**: Restore from latest backup and regenerate if necessary
```bash
# Restore from backup
cp /mnt/NV2/ssh-keys/maintenance-LATEST/key_rsa ~/.ssh/
chmod 600 ~/.ssh/key_rsa

# Verify
ssh-keygen -l -f ~/.ssh/key_rsa
```

**4. Backup Cleanup Fails**
- **Symptom**: Old backups not being removed
- **Solution**: Manual cleanup or check directory permissions
```bash
cd /mnt/NV2/ssh-keys/
ls -dt maintenance-* | tail -n +11                    # list backups beyond the newest 10
ls -dt maintenance-* | tail -n +11 | xargs -r rm -rf  # manually remove them if needed
```

### Diagnostic Commands

```bash
# Check script execution
bash -x ./ssh_key_maintenance.sh

# Verify NAS mount
df -h | grep NV2

# List all backups
ls -lah /mnt/NV2/ssh-keys/

# Check key permissions
ls -la ~/.ssh/*_rsa*

# Test key validity
ssh-keygen -l -f ~/.ssh/key_rsa

# View the most recent maintenance report
cat "$(ls -dt /mnt/NV2/ssh-keys/maintenance-* | head -1)/MAINTENANCE_REPORT.md"
```

## Integration Points

### External Dependencies
- **NAS Storage**: `/mnt/NV2` must be mounted
- **SSH Tools**: `ssh-keygen`, `ssh` commands
- **cron**: For automated scheduling (optional)
- **bash**: Script requires bash shell

### File System Dependencies
- **Backup Storage**: `/mnt/NV2/ssh-keys/` (persistent NAS)
- **SSH Directory**: `~/.ssh/` (local user directory)
- **Log Files**: Optional log redirection for cron jobs

### Network Dependencies
- **NAS Access**: Network connectivity to NAS storage
- **SSH Servers**: For connection testing in maintenance report
- **DNS Resolution**: For hostname-based SSH connections

## Security Considerations

### Backup Security
- Backup directories set to 700 permissions (owner-only access)
- Private keys never transmitted over network during backup
- Backups stored on trusted NAS with access controls

### Key Rotation Best Practices
- **Annual rotation**: Recommended for primary keys
- **Bi-annual rotation**: Emergency/high-privilege keys
- **Grace period**: Keep old keys active for 30 days during transition
- **Test before removal**: Verify new key access before decommissioning old keys

### Access Control
- Script must run as the user who owns the SSH keys
- NAS permissions must allow user write access
- Maintenance reports contain sensitive connection information
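A quick audit along these lines can confirm the permission assumptions before a run (paths are the ones used throughout this document; each line tolerates its target being absent):

```bash
# Flag anything with looser permissions than expected.
stat -c '%a %n' ~/.ssh 2>/dev/null || true                       # expect 700
stat -c '%a %n' ~/.ssh/*_rsa 2>/dev/null || true                 # expect 600
find /mnt/NV2/ssh-keys -maxdepth 1 -type d ! -perm 700 2>/dev/null || true
```

The `find` line prints only offending backup directories, so empty output means the backup root is clean.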
## Future Enhancements

**Planned Improvements**:
- Support for Ed25519 and other modern key types
- Automated key rotation workflow
- Discord/webhook notifications for rotation reminders
- Integration with centralized secrets management
- Automated deployment of new public keys to servers

## Related Documentation

- **Technology Overview**: `/networking/CONTEXT.md`
- **Troubleshooting**: `/networking/troubleshooting.md`
- **Examples**: `/networking/examples/` - SSH configuration patterns
- **Main Instructions**: `/CLAUDE.md` - Context loading rules

## Notes

This script is critical for maintaining SSH key security and ensuring backup availability for disaster recovery. Regular execution (monthly recommended) helps identify aging keys before they become a security risk and maintains a backup history for point-in-time recovery.

The maintenance report format is designed to be both human-readable and actionable, providing specific commands for connectivity testing and clear recommendations for key rotation.

@ -117,6 +117,106 @@ sudo systemctl restart systemd-resolved
sudo systemd-resolve --flush-caches
```

### UniFi Firewall Blocking DNS to New Networks
**Symptoms**: New network/VLAN has "no internet access" - devices connect to WiFi but cannot browse or resolve domain names. Ping to IP addresses (8.8.8.8) works, but DNS resolution fails.

**Root Cause**: Firewall rules blocking traffic from DNS servers (Pi-holes in "Servers" network group) to new networks. Rules like "Servers to WiFi" or "Servers to Home" with a DROP action block ALL traffic, including DNS responses on port 53.

**Diagnosis**:
```bash
# From affected device on new network:

# Test if routing works (should succeed)
ping 8.8.8.8
traceroute 8.8.8.8

# Test if DNS resolution works (will fail)
nslookup google.com

# Test DNS servers directly (will timeout or fail)
nslookup google.com 10.10.0.16
nslookup google.com 10.10.0.226

# Test public DNS (should work)
nslookup google.com 8.8.8.8

# Check DHCP-assigned DNS servers
# Windows:
ipconfig /all | findstr DNS

# Linux/macOS:
cat /etc/resolv.conf
```

**If routing works but DNS fails**, the issue is the firewall blocking DNS traffic, not network configuration.

**Solutions**:

**Step 1: Identify Blocking Rules**
- In UniFi: Settings → Firewall & Security → Traffic Rules → LAN In
- Look for DROP rules with:
  - Source: Servers (or network group containing Pi-holes)
  - Destination: Your new network (e.g., "Home WiFi", "Home Network")
  - Examples: "Servers to WiFi", "Servers to Home"

**Step 2: Create DNS Allow Rules (BEFORE Drop Rules)**

Create new rules positioned ABOVE the drop rules:

```
Name: Allow DNS - Servers to [Network Name]
Action: Accept
Rule Applied: Before Predefined Rules
Type: LAN In
Protocol: TCP and UDP
Source:
  - Network/Group: Servers (or specific Pi-hole IPs: 10.10.0.16, 10.10.0.226)
  - Port: Any
Destination:
  - Network: [Your new network - e.g., Home WiFi]
  - Port: 53 (DNS)
```

Repeat for each network that needs DNS access from servers.

**Step 3: Verify Rule Order**

**CRITICAL**: Firewall rules are processed top-to-bottom; the first match wins.

Correct order:
```
✅ Allow DNS - Servers to Home Network (Accept, Port 53)
✅ Allow DNS - Servers to Home WiFi (Accept, Port 53)
❌ Servers to Home (Drop, All ports)
❌ Servers to WiFi (Drop, All ports)
```

**Step 4: Re-enable Drop Rules**

Once the DNS allow rules are in place and positioned correctly, re-enable any drop rules you disabled during troubleshooting.

**Verification**:
```bash
# From device on new network:

# DNS should work
nslookup google.com

# Browsing should work
ping google.com

# Other server traffic should still be blocked (expected)
ping 10.10.0.16 # Should fail or timeout
ssh 10.10.0.16 # Should be blocked
```

**Real-World Example**: New "Home WiFi" network (10.1.0.0/24, VLAN 2)
- **Problem**: Devices connected but couldn't browse web
- **Diagnosis**: `traceroute 8.8.8.8` worked (16ms), but `nslookup google.com` failed
- **Cause**: Firewall rule "Servers to WiFi" (rule 20004) blocked Pi-hole DNS responses
- **Solution**: Added "Allow DNS - Servers to Home WiFi" rule (Accept, port 53) above drop rule
- **Result**: DNS resolution works, other server traffic remains properly blocked
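Before touching rules, you can probe port-53 reachability from a client on the affected VLAN with nothing but bash. This checks TCP only; DNS normally uses UDP, but the same drop rule blocks both, so a TCP probe is a reasonable firewall check:

```bash
# Probe TCP/53 on each Pi-hole using bash's built-in /dev/tcp redirection.
for dns in 10.10.0.16 10.10.0.226; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/$dns/53" 2>/dev/null; then
    echo "$dns tcp/53 reachable"
  else
    echo "$dns tcp/53 blocked or filtered"
  fi
done
```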

## Reverse Proxy and Load Balancer Issues

### Nginx Configuration Problems

@ -171,6 +271,204 @@ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -out /etc/ssl/certs/selfsigned.crt
```

### Intermittent SSL Errors (ERR_SSL_UNRECOGNIZED_NAME_ALERT)
**Symptoms**: Connections that succeed sometimes and fail at other times, `ERR_SSL_UNRECOGNIZED_NAME_ALERT` in the browser, intermittent failures even from the internal network

**Root Cause**: IPv6/IPv4 DNS conflicts where public DNS returns Cloudflare IPv6 addresses while local DNS (Pi-hole) only overrides IPv4. Modern systems prefer IPv6, causing intermittent failures when IPv6 connection attempts fail.

**Diagnosis**:
```bash
# Check for multiple DNS records (IPv4 + IPv6)
nslookup domain.example.com 10.10.0.16
dig domain.example.com @10.10.0.16

# Compare with public DNS
host domain.example.com 8.8.8.8

# Test IPv6 vs IPv4 connectivity
curl -6 -I https://domain.example.com # IPv6 (may fail)
curl -4 -I https://domain.example.com # IPv4 (should work)

# Check if system has IPv6 connectivity
ip -6 addr show | grep global
```

**Example Problem**:
```bash
# Local Pi-hole returns:
domain.example.com → 10.10.0.16 (IPv4 internal NPM)

# Public DNS also returns:
domain.example.com → 2606:4700:... (Cloudflare IPv6)

# System tries IPv6 first → fails
# Sometimes falls back to IPv4 → works
# Result: Intermittent SSL errors
```
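You can see the address-family ordering a client will actually use with `getent`, which resolves through the system's normal lookup path rather than querying a server directly the way `dig` does. IPv6 rows listed first mean the client will try IPv6 before IPv4:

```bash
# Show resolver output in the order the system will try it.
# Substitute your domain; "localhost" is shown as a runnable example.
getent ahosts localhost
```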

**Solutions**:

**Option 1: Add IPv6 Local DNS Override** (Recommended)
```bash
# Add non-routable IPv6 address to Pi-hole custom.list
ssh pihole "docker exec pihole bash -c 'echo \"fe80::1 domain.example.com\" >> /etc/pihole/custom.list'"

# Restart Pi-hole DNS
ssh pihole "docker exec pihole pihole restartdns"

# Verify fix
nslookup domain.example.com 10.10.0.16
# Should show: 10.10.0.16 (IPv4) and fe80::1 (IPv6 link-local)
```

**Option 2: Remove Cloudflare DNS Records** (If public access not needed)
```bash
# In Cloudflare dashboard:
# - Turn off orange cloud (proxy) for the domain
# - Or delete A/AAAA records entirely

# This removes Cloudflare IPs from public DNS
```

**Option 3: Disable IPv6 on Client** (Temporary testing)
```bash
# Disable IPv6 temporarily to confirm diagnosis
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1

# Test domain - should work consistently now

# Re-enable when done testing
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
```

**Verification**:
```bash
# After applying fix, verify consistent resolution
for i in {1..10}; do
  echo "Test $i:"
  curl -I https://domain.example.com 2>&1 | grep -E "(HTTP|SSL|certificate)"
  sleep 1
done

# All attempts should succeed consistently
```

**Real-World Example**: git.manticorum.com
- **Problem**: Intermittent SSL errors from internal network (10.0.0.0/24)
- **Diagnosis**: Pi-hole had IPv4 override (10.10.0.16) but public DNS returned Cloudflare IPv6
- **Solution**: Added `fe80::1 git.manticorum.com` to Pi-hole custom.list
- **Result**: Consistent successful connections, always routes to internal NPM

### iOS DNS Bypass Issues (Encrypted DNS)
**Symptoms**: iOS device gets 403 errors when accessing internal services, NPM logs show external public IP as source instead of local 10.x.x.x IP, even with correct Pi-hole DNS configuration

**Root Cause**: iOS devices can use encrypted DNS (DNS-over-HTTPS or DNS-over-TLS) that bypasses traditional DNS servers, even when correctly configured. This causes the device to resolve to public/Cloudflare IPs instead of local overrides, routing traffic through the public internet and triggering ACL denials.

**Diagnosis**:
```bash
# Check NPM access logs for the service
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 tail -50 /data/logs/proxy-host-*_access.log | grep 403"

# Look for external IPs in logs instead of local 10.x.x.x:
# BAD: [Client 73.36.102.55] - - 403 (external IP, blocked by ACL)
# GOOD: [Client 10.0.0.207] - 200 200 (local IP, allowed)

# Verify iOS device is on local network
# On iOS: Settings → Wi-Fi → (i) → IP Address
# Should show 10.0.0.x or 10.10.0.x

# Verify Pi-hole DNS is configured
# On iOS: Settings → Wi-Fi → (i) → DNS
# Should show 10.10.0.16

# Test if DNS is actually being used
nslookup domain.example.com 10.10.0.16 # Shows what Pi-hole returns
# Then check what iOS actually resolves (if possible via network sniffer)
```

**Example Problem**:
```bash
# iOS device configuration:
IP Address: 10.0.0.207 (correct, on local network)
DNS: 10.10.0.16 (correct, Pi-hole configured)
Cellular Data: OFF

# But NPM logs show:
[Client 73.36.102.55] - - 403 # Coming from ISP public IP!

# Why: iOS is using encrypted DNS, bypassing Pi-hole
# Result: Resolves to Cloudflare IP, routes through public internet,
# NPM sees external IP, ACL blocks with 403
```

**Solutions**:

**Option 1: Add Public IP to NPM Access Rules** (Quickest, recommended for mobile devices)
```bash
# Find which config file contains your domain
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 sh -c 'grep -l domain.example.com /data/nginx/proxy_host/*.conf'"
# Example output: /data/nginx/proxy_host/19.conf

# Add public IP to access rules (replace YOUR_PUBLIC_IP and config number)
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 sed -i '/allow 10.10.0.0\/24;/a \ \n allow YOUR_PUBLIC_IP;' /data/nginx/proxy_host/19.conf"

# Verify the change
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 cat /data/nginx/proxy_host/19.conf" | grep -A 8 "Access Rules"

# Test and reload nginx
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 nginx -t"
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 nginx -s reload"
```

**Option 2: Reset iOS Network Settings** (Nuclear option, clears DNS cache/profiles)
```
iOS: Settings → General → Transfer or Reset iPhone → Reset → Reset Network Settings
WARNING: This removes all saved WiFi passwords and network configurations
```

**Option 3: Check for DNS Configuration Profiles**
```
iOS: Settings → General → VPN & Device Management
- Look for any DNS or Configuration Profiles
- Remove any third-party DNS profiles (AdGuard, NextDNS, etc.)
```

**Option 4: Disable Private Relay and IP Tracking** (Usually already tried)
```
iOS: Settings → [Your Name] → iCloud → Private Relay → OFF
iOS: Settings → Wi-Fi → (i) → Limit IP Address Tracking → OFF
```

**Option 5: Check Browser DNS Settings** (If using Brave or Firefox)
```
Brave: Settings → Brave Shields & Privacy → Use secure DNS → OFF
Firefox: Settings → DNS over HTTPS → OFF
```

**Verification**:
```bash
# After applying fix, check NPM logs while accessing from iOS
ssh 10.10.0.16 "docker exec nginx-proxy-manager_app_1 tail -f /data/logs/proxy-host-*_access.log"

# With Option 1 (added public IP): Should see 200 status with external IP
# With Option 2-5 (fixed DNS): Should see 200 status with local 10.x.x.x IP
```

**Important Notes**:
- **Option 1 is recommended for mobile devices** as iOS encrypted DNS behavior is inconsistent
- Public IP workaround requires updating if ISP changes your IP (rare for residential)
- Manual nginx config changes (Option 1) will be **overwritten if you edit the proxy host in NPM UI**
- To make permanent, either use NPM UI to add the IP, or re-apply after UI changes
- This issue can affect any iOS device (iPhone, iPad) and some Android devices with encrypted DNS

**Real-World Example**: git.manticorum.com iOS Access
- **Problem**: iPhone showing 403 errors, desktop working fine on same network
- **iOS Config**: IP 10.0.0.207, DNS 10.10.0.16, Cellular OFF (all correct)
- **NPM Logs**: iPhone requests showing as [Client 73.36.102.55] (ISP public IP)
- **Diagnosis**: iOS using encrypted DNS, bypassing Pi-hole, routing through Cloudflare
- **Solution**: Added `allow 73.36.102.55;` to NPM proxy_host/19.conf ACL rules
- **Result**: Immediate access, user able to log in to Gitea successfully

## Network Storage Issues

### CIFS/SMB Mount Problems

@ -277,6 +575,358 @@ sudo nano /etc/ssh/sshd_config
sudo systemctl restart sshd
```

## Pi-hole High Availability Troubleshooting

### Pi-hole Not Responding to DNS Queries
**Symptoms**: DNS resolution failures, clients cannot resolve domains, Pi-hole web UI inaccessible
**Diagnosis**:
```bash
# Test DNS response from both Pi-holes
dig @10.10.0.16 google.com
dig @10.10.0.226 google.com

# Check Pi-hole container status
ssh npm-pihole "docker ps | grep pihole"
ssh ubuntu-manticore "docker ps | grep pihole"

# Check Pi-hole logs
ssh npm-pihole "docker logs pihole --tail 50"
ssh ubuntu-manticore "docker logs pihole --tail 50"

# Test port 53 is listening
ssh ubuntu-manticore "netstat -tulpn | grep :53"
ssh ubuntu-manticore "ss -tulpn | grep :53"
```

**Solutions**:
```bash
# Restart Pi-hole containers
ssh npm-pihole "docker restart pihole"
ssh ubuntu-manticore "cd ~/docker/pihole && docker compose restart"

# Check for port conflicts
ssh ubuntu-manticore "lsof -i :53"

# If systemd-resolved is conflicting, disable it
ssh ubuntu-manticore "sudo systemctl stop systemd-resolved"
ssh ubuntu-manticore "sudo systemctl disable systemd-resolved"

# Rebuild Pi-hole container
ssh ubuntu-manticore "cd ~/docker/pihole && docker compose down && docker compose up -d"
```

### DNS Failover Not Working
**Symptoms**: DNS stops working when primary Pi-hole fails, clients not using secondary DNS
**Diagnosis**:
```bash
# Check UniFi DHCP DNS configuration
# Via UniFi UI: Settings → Networks → LAN → DHCP
# DNS Server 1: 10.10.0.16
# DNS Server 2: 10.10.0.226

# Check client DNS configuration
# Windows:
ipconfig /all | findstr /i "DNS"

# Linux/macOS:
cat /etc/resolv.conf

# Check if secondary Pi-hole is reachable
ping -c 4 10.10.0.226
dig @10.10.0.226 google.com

# Test failover manually
ssh npm-pihole "docker stop pihole"
dig google.com # Should still work via secondary
ssh npm-pihole "docker start pihole"
```

**Solutions**:
```bash
# Force DHCP lease renewal to get updated DNS servers
# Windows:
ipconfig /release && ipconfig /renew

# Linux:
sudo dhclient -r && sudo dhclient

# macOS/iOS:
# Disconnect and reconnect to WiFi

# Verify UniFi DHCP settings are correct
# Both DNS servers must be configured in UniFi controller

# Check client respects both DNS servers
# Some clients may cache failed DNS responses
# Flush DNS cache:
# Windows: ipconfig /flushdns
# macOS: sudo dscacheutil -flushcache
# Linux: sudo systemd-resolve --flush-caches
```

### Orbital Sync Not Syncing
**Symptoms**: Blocklists/whitelists differ between Pi-holes, custom DNS entries missing on secondary
**Diagnosis**:
```bash
# Check Orbital Sync container status
ssh ubuntu-manticore "docker ps | grep orbital-sync"

# Check Orbital Sync logs
ssh ubuntu-manticore "docker logs orbital-sync --tail 100"

# Look for sync errors in logs
ssh ubuntu-manticore "docker logs orbital-sync 2>&1 | grep -i error"

# Verify API tokens are correct
ssh ubuntu-manticore "cat ~/docker/orbital-sync/.env"

# Test API access manually (Pi-hole v5: token is the WEBPASSWORD hash,
# passed as the auth query parameter)
ssh npm-pihole "docker exec pihole grep WEBPASSWORD /etc/pihole/setupVars.conf"
curl "http://10.10.0.16/admin/api.php?status&auth=YOUR_TOKEN"

# Compare blocked-domain counts between Pi-holes (domains_being_blocked)
curl -s "http://10.10.0.16/admin/api.php?summaryRaw&auth=YOUR_TOKEN"
curl -s "http://10.10.0.226:8053/admin/api.php?summaryRaw&auth=YOUR_TOKEN"
```

**Solutions**:
```bash
# Regenerate API tokens
# Primary Pi-hole: http://10.10.0.16/admin → Settings → API → Generate New Token
# Secondary Pi-hole: http://10.10.0.226:8053/admin → Settings → API → Generate New Token

# Update Orbital Sync .env file
ssh ubuntu-manticore "nano ~/docker/orbital-sync/.env"
# Update PRIMARY_HOST_PASSWORD and SECONDARY_HOST_PASSWORD

# Restart Orbital Sync
ssh ubuntu-manticore "cd ~/docker/orbital-sync && docker compose restart"

# Force immediate sync by restarting
ssh ubuntu-manticore "cd ~/docker/orbital-sync && docker compose down && docker compose up -d"

# Monitor sync in real-time
ssh ubuntu-manticore "docker logs orbital-sync -f"

# If all else fails, manually sync via Teleporter
# Primary: Settings → Teleporter → Backup
# Secondary: Settings → Teleporter → Restore (upload backup file)
```

### NPM DNS Sync Failing
**Symptoms**: NPM proxy hosts missing from Pi-hole custom.list, new domains not resolving
**Diagnosis**:
```bash
# Check NPM sync script status
ssh npm-pihole "cat /var/log/cron.log | grep npm-pihole-sync"

# Run sync script manually to see errors
ssh npm-pihole "/home/cal/scripts/npm-pihole-sync.sh"

# Check script can access both Pi-holes
ssh npm-pihole "docker exec pihole cat /etc/pihole/custom.list | grep git.manticorum.com"
ssh npm-pihole "ssh ubuntu-manticore 'docker exec pihole cat /etc/pihole/custom.list | grep git.manticorum.com'"

# Verify SSH connectivity to ubuntu-manticore
ssh npm-pihole "ssh ubuntu-manticore 'echo SSH OK'"
```

**Solutions**:
```bash
# Fix SSH key authentication (if needed)
ssh npm-pihole "ssh-copy-id ubuntu-manticore"

# Test script with dry-run
ssh npm-pihole "/home/cal/scripts/npm-pihole-sync.sh --dry-run"

# Run script manually to sync immediately
ssh npm-pihole "/home/cal/scripts/npm-pihole-sync.sh"

# Verify cron job is configured
ssh npm-pihole "crontab -l | grep npm-pihole-sync"

# If cron job missing, add it
ssh npm-pihole "crontab -e"
# Add: 0 * * * * /home/cal/scripts/npm-pihole-sync.sh >> /var/log/npm-pihole-sync.log 2>&1

# Check script logs
ssh npm-pihole "tail -50 /var/log/npm-pihole-sync.log"
```

### Secondary Pi-hole Performance Issues
**Symptoms**: ubuntu-manticore slow, high CPU/RAM usage, Pi-hole affecting Jellyfin/Tdarr
**Diagnosis**:
```bash
# Check resource usage
ssh ubuntu-manticore "docker stats --no-stream"

# Pi-hole should use <1% CPU and ~150MB RAM
# If higher, investigate:
ssh ubuntu-manticore "docker logs pihole --tail 100"

# Check for excessive queries
ssh ubuntu-manticore "docker exec pihole pihole -c -e"

# Check for DNS loops or misconfiguration
ssh ubuntu-manticore "docker exec pihole pihole -t" # Tail pihole.log
```

**Solutions**:
```bash
# Restart Pi-hole if resource usage is high
ssh ubuntu-manticore "docker restart pihole"

# Check for DNS query loops
# Look for same domain being queried repeatedly
ssh ubuntu-manticore "docker exec pihole pihole -t | grep -A 5 'query\[A\]'"

# Adjust Pi-hole cache settings if needed
ssh ubuntu-manticore "docker exec pihole bash -c 'echo \"cache-size=10000\" >> /etc/dnsmasq.d/99-custom.conf'"
ssh ubuntu-manticore "docker restart pihole"

# If Jellyfin/Tdarr are affected, verify Pi-hole is using minimal resources
# Resource limits can be added to docker-compose.yml:
ssh ubuntu-manticore "nano ~/docker/pihole/docker-compose.yml"
# Add under pihole service:
# deploy:
#   resources:
#     limits:
#       cpus: '0.5'
#       memory: 256M
```
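The commented fragment above, written out as compose YAML. Treat this as a sketch to adapt rather than a verified fragment for this host: `deploy.resources.limits` is honored by recent `docker compose` releases (and under Swarm), while older Compose files expressed the same limits with service-level `mem_limit` and `cpus` keys.

```yaml
services:
  pihole:
    # ...existing image/ports/volumes for this host's pihole service...
    deploy:
      resources:
        limits:
          cpus: '0.5'    # cap Pi-hole at half a core
          memory: 256M   # cap RAM so Jellyfin/Tdarr keep headroom
```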

### iOS Devices Still Getting 403 Errors (Post-HA Deployment)
**Symptoms**: After deploying dual Pi-hole setup, iOS devices still bypass DNS and get 403 errors on internal services
**Diagnosis**:
```bash
# Verify UniFi DHCP has BOTH Pi-holes configured, NO public DNS
# UniFi UI: Settings → Networks → LAN → DHCP → Name Server
# DNS1: 10.10.0.16
# DNS2: 10.10.0.226
# Public DNS (1.1.1.1, 8.8.8.8): REMOVED

# Check iOS DNS settings
# iOS: Settings → WiFi → (i) → DNS
# Should show: 10.10.0.16

# Force iOS DHCP renewal
# iOS: Settings → WiFi → Forget Network → Reconnect

# Check NPM logs for request source
ssh npm-pihole "docker exec nginx-proxy-manager_app_1 tail -50 /data/logs/proxy-host-*_access.log | grep 403"

# Verify both Pi-holes have custom DNS entries
ssh npm-pihole "docker exec pihole cat /etc/pihole/custom.list | grep git.manticorum.com"
ssh ubuntu-manticore "docker exec pihole cat /etc/pihole/custom.list | grep git.manticorum.com"
```

**Solutions**:
```bash
# Solution 1: Verify public DNS is removed from UniFi DHCP
# If public DNS (1.1.1.1) is still configured, iOS will prefer it
# Remove ALL public DNS servers from UniFi DHCP configuration

# Solution 2: Force iOS to renew DHCP lease
# iOS: Settings → WiFi → Forget Network
# Then reconnect to WiFi
# This forces device to get new DNS servers from DHCP

# Solution 3: Disable iOS encrypted DNS if still active
# iOS: Settings → [Your Name] → iCloud → Private Relay → OFF
# iOS: Check for DNS profiles: Settings → General → VPN & Device Management

# Solution 4: If encrypted DNS persists, add public IP to NPM ACL (fallback)
# See "iOS DNS Bypass Issues" section above for detailed steps

# Solution 5: Test with different iOS device to isolate issue
# If other iOS devices work, issue is device-specific configuration

# Verification after fix
ssh npm-pihole "docker exec nginx-proxy-manager_app_1 tail -f /data/logs/proxy-host-*_access.log"
# Access git.manticorum.com from iOS
# Should see: [Client 10.0.0.x] - - 200 (local IP)
```

### Both Pi-holes Failing Simultaneously
**Symptoms**: Complete DNS failure across network, all devices cannot resolve domains
**Diagnosis**:
```bash
# Check both Pi-hole containers
ssh npm-pihole "docker ps -a | grep pihole"
ssh ubuntu-manticore "docker ps -a | grep pihole"

# Check both hosts are reachable
ping -c 4 10.10.0.16
ping -c 4 10.10.0.226

# Check Docker daemon on both hosts
ssh npm-pihole "systemctl status docker"
ssh ubuntu-manticore "systemctl status docker"

# Test emergency DNS (bypassing Pi-hole)
dig @8.8.8.8 google.com
```

**Solutions**:
```bash
# Emergency: Temporarily use public DNS
# UniFi UI: Settings → Networks → LAN → DHCP → Name Server
# DNS1: 8.8.8.8 (Google DNS - temporary)
# DNS2: 1.1.1.1 (Cloudflare - temporary)

# Restart both Pi-holes
ssh npm-pihole "docker restart pihole"
ssh ubuntu-manticore "docker restart pihole"

# If Docker daemon issues:
ssh npm-pihole "sudo systemctl restart docker"
ssh ubuntu-manticore "sudo systemctl restart docker"

# Rebuild both Pi-holes if corruption suspected
ssh npm-pihole "cd ~/pihole && docker compose down && docker compose up -d"
ssh ubuntu-manticore "cd ~/docker/pihole && docker compose down && docker compose up -d"

# After Pi-holes are restored, revert UniFi DHCP to Pi-holes
# UniFi UI: Settings → Networks → LAN → DHCP → Name Server
# DNS1: 10.10.0.16
# DNS2: 10.10.0.226
```

### Query Load Not Balanced Between Pi-holes
|
||||
**Symptoms**: Primary Pi-hole getting most queries, secondary rarely used
|
||||
**Diagnosis**:
|
||||
```bash
|
||||
# Check query counts on both Pi-holes
|
||||
# Primary: http://10.10.0.16/admin → Dashboard → Total Queries
|
||||
# Secondary: http://10.10.0.226:8053/admin → Dashboard → Total Queries
|
||||
|
||||
# This is NORMAL behavior - clients prefer DNS1 by default
|
||||
# Secondary is for failover, not load balancing
|
||||
|
||||
# To verify failover works:
|
||||
ssh npm-pihole "docker stop pihole"
|
||||
# Wait 30 seconds
|
||||
# Check secondary query count - should increase
|
||||
ssh npm-pihole "docker start pihole"
|
||||
```
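The stop/start test above exercises real failover; the client-side logic it relies on can be sketched with a stub resolver. `resolve()` and `lookup()` here are hypothetical stand-ins (a real lookup would be something like `dig @"$server" +short "$name"`); the stub simulates the primary being down:

```bash
# Sketch of client-style DNS failover: try DNS1, fall back to DNS2.
# resolve() is a stub standing in for a real query against a server;
# it simulates the primary (10.10.0.16) not answering.
resolve() {
  case "$1" in
    10.10.0.16)  return 1 ;;                      # primary down
    10.10.0.226) echo "10.10.0.16"; return 0 ;;   # secondary answers
  esac
}

lookup() {
  resolve 10.10.0.16 "$1" || resolve 10.10.0.226 "$1"
}

lookup termix.manticorum.com   # answered by the secondary
```

This is exactly why the imbalance above is harmless: the `||` fallback only fires when DNS1 fails, so the secondary sits mostly idle until it is actually needed.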

**Solutions**:
```bash
# No action needed - this is expected behavior
# DNS failover is for redundancy, not load distribution

# If you want true load balancing (advanced):
# Option 1: Configure some devices to prefer DNS2
# Manually set DNS on specific devices to 10.10.0.226, 10.10.0.16

# Option 2: Implement DNS round-robin (requires custom DHCP)
# Not recommended for homelab - adds complexity

# Option 3: Accept default behavior (recommended)
# Primary handles most traffic, secondary provides failover
# This is industry standard DNS HA behavior
```

## Service Discovery and DNS Issues

### Local DNS Problems

@ -1,12 +1,14 @@
# Nginx Proxy Manager + Pi-hole Setup

**Host**: 10.10.0.16
**Services**: Nginx Proxy Manager, Pi-hole DNS
**Primary Host**: 10.10.0.16 (npm-pihole)
**Secondary Host**: 10.10.0.226 (ubuntu-manticore)
**Services**: Nginx Proxy Manager, Dual Pi-hole DNS (HA)

This host runs both NPM (reverse proxy) and Pi-hole (DNS server) as Docker containers.
This deployment uses dual Pi-hole instances across separate physical hosts for high availability DNS, with NPM on the primary host handling reverse proxy duties.

## Quick Info

### Primary Host (npm-pihole)
| Property | Value |
|----------|-------|
| **IP** | 10.10.0.16 |
@ -14,6 +16,17 @@ This host runs both NPM (reverse proxy) and Pi-hole (DNS server) as Docker conta
| **NPM Container** | nginx-proxy-manager_app_1 |
| **Pi-hole Container** | pihole |
| **Pi-hole Web** | http://10.10.0.16/admin |
| **Role** | Primary DNS + Reverse Proxy |

### Secondary Host (ubuntu-manticore)
| Property | Value |
|----------|-------|
| **IP** | 10.10.0.226 |
| **User** | cal |
| **Pi-hole Container** | pihole |
| **Orbital Sync Container** | orbital-sync |
| **Pi-hole Web** | http://10.10.0.226:8053/admin |
| **Role** | Secondary DNS (failover) |

## Services

@ -34,6 +47,38 @@ This host runs both NPM (reverse proxy) and Pi-hole (DNS server) as Docker conta

## Architecture

### High Availability DNS Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                      UniFi DHCP Server                      │
│      DNS1: 10.10.0.16            DNS2: 10.10.0.226          │
└────────────┬────────────────────────────┬───────────────────┘
             │                            │
             ▼                            ▼
┌────────────────┐          ┌────────────────┐
│ npm-pihole     │          │ ubuntu-        │
│ 10.10.0.16     │◄────────►│ manticore      │
│                │ Orbital  │ 10.10.0.226    │
│ - NPM          │ Sync     │                │
│ - Pi-hole 1    │          │ - Jellyfin     │
│   (Primary)    │          │ - Tdarr        │
└────────────────┘          │ - Pi-hole 2    │
        ▲                   │   (Secondary)  │
        │                   └────────────────┘
        │
┌────────────────┐
│  NPM DNS Sync  │
│ (hourly cron)  │
│                │
│ Syncs proxy    │
│ hosts to both  │
│ Pi-holes       │
└────────────────┘
```
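From a client's point of view, the DHCP offer above simply yields two resolvers. On a Linux client the resulting resolver configuration would look roughly like this (illustrative only, not copied from a real host):

```
nameserver 10.10.0.16     # Pi-hole 1 (primary, preferred)
nameserver 10.10.0.226    # Pi-hole 2 (used when the primary times out)
```

Most stock resolvers only consult the second entry after the first fails, which is what makes this redundancy rather than load balancing.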

### Request Flow

```
┌─────────────┐
│   Clients   │
@ -41,10 +86,11 @@ This host runs both NPM (reverse proxy) and Pi-hole (DNS server) as Docker conta
└──────┬──────┘
       │ DNS Query: termix.manticorum.com?
       ▼
┌─────────────────────┐
│ Pi-hole (10.10.0.16)│
┌──────────────────────┐
│ Pi-hole (Primary)    │ ◄── Clients prefer DNS1
│ 10.10.0.16           │ ◄── Failover to DNS2 (10.10.0.226) if down
│ Returns: 10.10.0.16  │ ← Local DNS override
└──────┬──────────────┘
└──────┬───────────────┘
       │ HTTPS Request
       ▼
┌──────────────────────────┐
@ -62,15 +108,24 @@ This host runs both NPM (reverse proxy) and Pi-hole (DNS server) as Docker conta
└──────────────────────┘
```

## DNS Sync Automation
**Benefits**:
- ✅ True high availability: DNS survives single host failure
- ✅ No public DNS fallback: All devices consistently use Pi-hole
- ✅ Automatic synchronization: Blocklists and custom DNS entries sync every 5 minutes
- ✅ Physical separation: Primary (LXC) and secondary (physical server) on different hosts
- ✅ Fixes iOS 403 errors: No more encrypted DNS bypass to public IPs

### Problem Solved
## DNS Synchronization

### NPM → Pi-hole DNS Sync

#### Problem Solved
When accessing homelab services by domain name (e.g., `termix.manticorum.com`), DNS would resolve to Cloudflare IPs, causing traffic to hairpin through the internet. This broke IP-based access lists since NPM would see your public IP instead of your internal IP.

### Solution
Automatically sync all NPM proxy hosts to Pi-hole's local DNS, pointing them to NPM's internal IP (10.10.0.16).
#### Solution
Automatically sync all NPM proxy hosts to **both Pi-holes'** local DNS, pointing them to NPM's internal IP (10.10.0.16).

### Sync Script
#### Sync Script

**Location**: `/home/cal/scripts/npm-pihole-sync.sh` (on 10.10.0.16)
**Repository**: `server-configs/networking/scripts/npm-pihole-sync.sh`
@ -79,14 +134,16 @@ The script:
1. Queries NPM's SQLite database for all enabled proxy hosts
2. Extracts domain names
3. Creates Pi-hole custom DNS entries pointing to NPM (10.10.0.16)
4. Reloads Pi-hole DNS
4. Syncs to **primary Pi-hole** (local Docker container)
5. Syncs to **secondary Pi-hole** (via SSH to ubuntu-manticore)
6. Reloads DNS on both Pi-holes
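Steps 2-3 boil down to prefixing each domain with NPM's IP in hosts-file format. A self-contained sketch (the domain list is inlined as a hypothetical sample here; the real script extracts it from NPM's SQLite database):

```bash
# Turn a domain list into custom.list-style entries pointing at NPM.
NPM_IP="10.10.0.16"
DOMAINS=$'termix.manticorum.com\njellyfin.manticorum.com'   # hypothetical sample

while IFS= read -r domain; do
  printf '%s %s\n' "$NPM_IP" "$domain"
done <<< "$DOMAINS"
# → 10.10.0.16 termix.manticorum.com
# → 10.10.0.16 jellyfin.manticorum.com
```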

**Usage:**
```bash
# Dry run (preview changes)
ssh cal@10.10.0.16 /home/cal/scripts/npm-pihole-sync.sh --dry-run

# Apply changes
# Apply changes to both Pi-holes
ssh cal@10.10.0.16 /home/cal/scripts/npm-pihole-sync.sh
```

@ -102,13 +159,68 @@ scp server-configs/networking/scripts/npm-pihole-sync.sh cal@10.10.0.16:/home/ca
# Make executable
ssh cal@10.10.0.16 "chmod +x /home/cal/scripts/npm-pihole-sync.sh"

# Test
# Test (shows what would be synced to both Pi-holes)
ssh cal@10.10.0.16 "/home/cal/scripts/npm-pihole-sync.sh --dry-run"

# Apply
ssh cal@10.10.0.16 "/home/cal/scripts/npm-pihole-sync.sh"

# Verify both Pi-holes have the entries
ssh cal@10.10.0.16 "docker exec pihole cat /etc/pihole/custom.list | grep termix"
ssh ubuntu-manticore "docker exec pihole cat /etc/pihole/custom.list | grep termix"
```
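Beyond grepping for a single host, the two files can be diffed to catch drift. A hypothetical self-contained sketch; the two lists are inlined here, whereas in practice they would come from the two `docker exec ... cat` commands above:

```bash
# Compare the DNS entry sets from both Pi-holes, order-insensitively.
primary=$'10.10.0.16 termix.manticorum.com\n10.10.0.16 jellyfin.manticorum.com'
secondary=$'10.10.0.16 termix.manticorum.com'   # simulated missing entry

if diff <(printf '%s\n' "$primary" | sort) \
        <(printf '%s\n' "$secondary" | sort) > /dev/null; then
  echo "in sync"
else
  echo "drift detected"
fi
# → drift detected
```

A non-empty diff usually just means Orbital Sync has not run since the last NPM sync; it resolves itself within one sync interval.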

### Pi-hole → Pi-hole Sync (Orbital Sync)

#### Purpose
Synchronizes blocklists, whitelists, custom DNS entries, and all other Pi-hole configuration from the primary to the secondary Pi-hole, so both give consistent ad blocking and DNS behavior.

#### Technology
- **Orbital Sync**: Modern replacement for the deprecated Gravity Sync
- **Method**: Uses Pi-hole's official Teleporter API (backup/restore)
- **Interval**: Every 5 minutes
- **Location**: ubuntu-manticore (co-located with secondary Pi-hole)

#### What Gets Synced
- ✅ Blocklists (adlists)
- ✅ Whitelists
- ✅ Regex blacklists/whitelists
- ✅ Custom DNS entries (local DNS records)
- ✅ CNAME records
- ✅ Client groups
- ✅ Audit log
- ❌ DHCP leases (not using Pi-hole for DHCP)
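The container itself would typically be defined in `~/docker/orbital-sync/docker-compose.yml`. The sketch below is an assumption based on Orbital Sync's documented environment variables, not a copy of the file on ubuntu-manticore:

```yaml
# Illustrative only -- image tag and variable names per the Orbital Sync docs.
services:
  orbital-sync:
    image: mattwebbio/orbital-sync:latest
    container_name: orbital-sync
    environment:
      PRIMARY_HOST_BASE_URL: "http://10.10.0.16"          # primary Pi-hole
      PRIMARY_HOST_PASSWORD: "${PRIMARY_HOST_PASSWORD}"   # from .env
      SECONDARY_HOSTS_1_BASE_URL: "http://10.10.0.226:8053"
      SECONDARY_HOSTS_1_PASSWORD: "${SECONDARY_HOST_PASSWORD}"
      INTERVAL_MINUTES: "5"
    restart: unless-stopped
```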

#### Management
```bash
# Check Orbital Sync status
ssh ubuntu-manticore "docker ps | grep orbital-sync"

# View sync logs
ssh ubuntu-manticore "docker logs orbital-sync --tail 50"

# Force immediate sync (restart container)
ssh ubuntu-manticore "cd ~/docker/orbital-sync && docker compose restart"

# Monitor sync in real-time
ssh ubuntu-manticore "docker logs orbital-sync -f"
```

#### Configuration
**Location**: `~/docker/orbital-sync/.env` on ubuntu-manticore

```bash
PRIMARY_HOST_PASSWORD=<api_token_from_primary_pihole>
SECONDARY_HOST_PASSWORD=<api_token_from_secondary_pihole>
```

**To regenerate app passwords (Pi-hole v6):**
1. Primary: http://10.10.0.16:81/admin → Settings → Web Interface / API → Configure app password
2. Secondary: http://10.10.0.226:8053/admin → Settings → Web Interface / API → Configure app password
3. Update `~/.claude/secrets/pihole1_app_password` and `pihole2_app_password`
4. Update `.env` file on ubuntu-manticore
5. Restart orbital-sync: `cd ~/docker/orbital-sync && docker compose restart`

## Cloudflare Real IP Configuration

To support external access through Cloudflare while maintaining proper IP detection for access lists, NPM is configured to trust Cloudflare's IP ranges and read the real client IP from the `CF-Connecting-IP` header.
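In nginx terms this relies on the realip module; a minimal illustrative fragment (the actual NPM custom-config location and the full Cloudflare range list are not reproduced here):

```nginx
# Trust Cloudflare's edge so the realip module may rewrite the client address
set_real_ip_from 173.245.48.0/20;   # one published Cloudflare range; add the rest
# ...remaining Cloudflare IPv4/IPv6 ranges...
real_ip_header CF-Connecting-IP;    # take the client IP from Cloudflare's header
```

With this in place, NPM's access lists see the visitor's real IP for Cloudflare-proxied requests, while direct LAN requests (which never hairpin, thanks to the Pi-hole overrides above) already carry the internal IP.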

@ -1,7 +1,12 @@
#!/bin/bash
# NPM to Pi-hole DNS Sync
# NPM to Pi-hole DNS Sync (Pi-hole v6 compatible)
# Syncs Nginx Proxy Manager proxy hosts to Pi-hole local DNS
# All domains point to NPM's IP, not the forward destination
# Supports dual Pi-hole deployment for high availability
#
# Pi-hole v6 changes:
# - Updates /etc/pihole/pihole.toml (dns.hosts array)
# - Updates /etc/pihole/hosts/custom.list (traditional format)

set -e

@ -13,7 +18,11 @@ fi
# NPM's IP address (where all domains should point)
NPM_IP="10.10.0.16"

echo "NPM → Pi-hole DNS Sync"
# Pi-hole instances
PRIMARY_PIHOLE="10.10.0.16"
SECONDARY_PIHOLE="10.10.0.226"

echo "NPM → Pi-hole DNS Sync (v6 compatible)"
echo "============================================================"

# Query NPM database for all enabled proxy hosts
@ -51,7 +60,7 @@ if [ "$DRY_RUN" = true ]; then
    exit 0
fi

# Build new custom.list
# Build new custom.list (traditional format)
NEW_DNS="# Pi-hole Local DNS Records
# Auto-synced from Nginx Proxy Manager
# All domains point to NPM at $NPM_IP
@ -62,13 +71,93 @@ while IFS= read -r domain; do
    NEW_DNS+="$NPM_IP $domain"$'\n'
done <<< "$DOMAINS"

# Write to Pi-hole
echo "$NEW_DNS" | docker exec -i pihole tee /etc/pihole/custom.list > /dev/null

# Reload Pi-hole DNS
docker exec pihole pihole restartdns reload > /dev/null
# Build TOML hosts array
TOML_HOSTS=""
while IFS= read -r domain; do
    TOML_HOSTS+="    \"$NPM_IP $domain\","$'\n'
done <<< "$DOMAINS"
# Remove trailing comma from last entry (printf, not echo: echo appends an
# extra newline, which would make sed's $ address match an empty final line)
TOML_HOSTS=$(printf '%s' "$TOML_HOSTS" | sed '$ s/,$//')

echo ""
echo "✓ Updated $RECORD_COUNT DNS records in Pi-hole"
echo "✓ All domains now point to NPM at $NPM_IP"
echo "✓ Reloaded Pi-hole DNS"
echo "Syncing to Pi-hole instances..."
echo "============================================================"

# Function to update Pi-hole v6 configuration
update_pihole_v6() {
    local pihole_container="$1"
    local ssh_prefix="$2"  # Empty for local, "ssh user@host" for remote

    # Write to /etc/pihole/hosts/custom.list
    if [ -z "$ssh_prefix" ]; then
        echo "$NEW_DNS" | docker exec -i "$pihole_container" tee /etc/pihole/hosts/custom.list > /dev/null 2>&1
    else
        $ssh_prefix "echo '${NEW_DNS}' | docker exec -i $pihole_container tee /etc/pihole/hosts/custom.list > /dev/null" 2>&1
    fi

    # Update pihole.toml using perl for reliable multi-line regex replacement
    # Escape special characters for perl
    local toml_hosts_perl=$(echo "$TOML_HOSTS" | sed 's/\\/\\\\/g; s/"/\\"/g')

    local perl_cmd="perl -i.bak -0pe 's/hosts\\s*=\\s*\\[[^\\]]*\\]/hosts = [\\n${toml_hosts_perl}\\n ]/s' /etc/pihole/pihole.toml"

    if [ -z "$ssh_prefix" ]; then
        docker exec "$pihole_container" sh -c "$perl_cmd"
    else
        $ssh_prefix "docker exec $pihole_container sh -c \"$perl_cmd\""
    fi
}

# Sync to primary Pi-hole (local)
echo "Primary Pi-hole ($PRIMARY_PIHOLE):"
if update_pihole_v6 "pihole" ""; then
    if docker exec pihole pihole reloaddns > /dev/null 2>&1; then
        echo " ✓ Updated $RECORD_COUNT DNS records"
        echo " ✓ Updated /etc/pihole/hosts/custom.list"
        echo " ✓ Updated /etc/pihole/pihole.toml"
        echo " ✓ Reloaded DNS"
    else
        echo " ✗ Failed to reload DNS"
        PRIMARY_SYNC_FAILED=1
    fi
else
    echo " ✗ Failed to update Pi-hole configuration"
    PRIMARY_SYNC_FAILED=1
fi

# Sync to secondary Pi-hole (remote via SSH using IP)
echo "Secondary Pi-hole ($SECONDARY_PIHOLE):"
if update_pihole_v6 "pihole" "ssh -o StrictHostKeyChecking=no cal@$SECONDARY_PIHOLE"; then
    if ssh -o StrictHostKeyChecking=no "cal@$SECONDARY_PIHOLE" "docker exec pihole pihole reloaddns > /dev/null" 2>&1; then
        echo " ✓ Updated $RECORD_COUNT DNS records"
        echo " ✓ Updated /etc/pihole/hosts/custom.list"
        echo " ✓ Updated /etc/pihole/pihole.toml"
        echo " ✓ Reloaded DNS"
    else
        echo " ✗ Failed to reload DNS"
        SECONDARY_SYNC_FAILED=1
    fi
else
    echo " ✗ Failed to update Pi-hole configuration or SSH connection issue"
    SECONDARY_SYNC_FAILED=1
fi

echo ""
if [ -z "$PRIMARY_SYNC_FAILED" ] && [ -z "$SECONDARY_SYNC_FAILED" ]; then
    echo "✓ Successfully synced to both Pi-hole instances"
    echo "✓ All $RECORD_COUNT domains now point to NPM at $NPM_IP"
    echo "✓ Updated both pihole.toml and custom.list files"
    exit 0
elif [ -z "$PRIMARY_SYNC_FAILED" ] && [ -n "$SECONDARY_SYNC_FAILED" ]; then
    echo "⚠ Primary sync successful, but secondary sync failed"
    echo "  Check SSH connectivity to $SECONDARY_PIHOLE and secondary Pi-hole health"
    exit 1
elif [ -n "$PRIMARY_SYNC_FAILED" ] && [ -z "$SECONDARY_SYNC_FAILED" ]; then
    echo "⚠ Secondary sync successful, but primary sync failed"
    echo "  Check primary Pi-hole health"
    exit 1
else
    echo "✗ Both Pi-hole syncs failed"
    echo "  Check Pi-hole containers and SSH connectivity"
    exit 2
fi
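The TOML hosts-array construction in the script can be exercised in isolation. One subtlety worth noting: the text must reach `sed` via `printf` rather than `echo`, because `echo` appends a newline, and `sed`'s `$` address would then point at an empty final line instead of the last entry. Domains are inlined here as a hypothetical sample:

```bash
NPM_IP="10.10.0.16"
DOMAINS=$'termix.manticorum.com\njellyfin.manticorum.com'

TOML_HOSTS=""
while IFS= read -r domain; do
  TOML_HOSTS+="    \"$NPM_IP $domain\","$'\n'
done <<< "$DOMAINS"

# Strip the comma from the last entry only
TOML_HOSTS=$(printf '%s' "$TOML_HOSTS" | sed '$ s/,$//')
printf '%s\n' "$TOML_HOSTS"
# →     "10.10.0.16 termix.manticorum.com",
# →     "10.10.0.16 jellyfin.manticorum.com"
```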