refactor: convert 5 more skills to commands, update transcriber defaults
Convert backlog, project-plan, save-doc, youtube-transcriber, and z-image from skills/ to commands/ so they appear as user-invocable slash commands with plugin-name prefixes.

Update youtube-transcriber: switch the default model from gpt-4o-transcribe to gpt-4o-mini-transcribe (OpenAI's current recommendation, at half the cost) and fix cost estimates that were 4-7x too high.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
parent 9a227204fd
commit 51fe634ff5
@@ -1,18 +1,9 @@
 ---
-name: next
 description: Check Gitea for open issues and surface the next task to work on
 ---
 
 # Backlog - Find Next Task
 
-## When to Activate This Skill
-- "/backlog"
-- "What should I work on?"
-- "Check for open issues"
-- "Any tasks to do?"
-- "What's next?"
-- "Show me the backlog"
-
 ## Core Workflow
 
 ### Step 1: Detect the current repo
@@ -1,5 +1,4 @@
 ---
-name: generate
 description: Analyze codebase and generate comprehensive PROJECT_PLAN.json task files
 ---
 
@@ -10,7 +9,7 @@ Creates structured `PROJECT_PLAN.json` files for tracking project work.
 ## Usage
 
 ```
-/project-plan [type]
+/project-plan:generate [type]
 ```
 
 **Types:**
@@ -175,13 +174,3 @@ Create `PROJECT_PLAN.json` in the project root or relevant subdirectory.
 - Default: `PROJECT_PLAN.json` in project root
 - For monorepos: `{subproject}/PROJECT_PLAN.json`
 - For focused work: `{directory}/PROJECT_PLAN.json`
-
-## Example Invocations
-
-```
-/project-plan                     # Default refactoring analysis
-/project-plan refactoring         # Technical debt focus
-/project-plan feature             # Feature implementation plan
-/project-plan audit               # Security/a11y audit
-/project-plan --output=frontend   # Save to frontend/PROJECT_PLAN.json
-```
@@ -1,8 +1,6 @@
 ---
-name: save
-allowed-tools: Read,Write,Edit,Glob,Grep,Bash
 description: Save documentation to the knowledge base with proper frontmatter
-user-invocable: true
+allowed-tools: Read,Write,Edit,Glob,Grep,Bash
 ---
 
 Save learnings, fixes, release notes, and other documentation to the claude-home knowledge base. Files are auto-committed and pushed by the `sync-kb` systemd timer (every 2 hours), which triggers kb-rag reindexing.
@@ -70,7 +68,7 @@ Save to `/mnt/NV2/Development/claude-home/{domain}/`. The file will be auto-comm
 
 ## Examples
 
-See `examples/` in this skill directory for templates of each document type:
+See `examples/` in this plugin directory for templates of each document type:
 - `examples/troubleshooting.md` — Bug fix / incident resolution
 - `examples/release-notes.md` — Deployment / release changelog
 - `examples/guide.md` — How-to / setup guide
plugins/youtube-transcriber/commands/transcribe.md (new file, +164)
@@ -0,0 +1,164 @@
+---
+description: Transcribe YouTube videos of any length using GPT-4o-mini-transcribe
+---
+
+# YouTube Transcriber - High-Quality Video Transcription
+
+## Script Location
+**Primary script**: `$YOUTUBE_TRANSCRIBER_DIR/transcribe.py`
+
+## Key Features
+- **Parallel processing**: Multiple videos can be transcribed simultaneously
+- **Unlimited length**: Auto-chunks videos >10 minutes to prevent API limits
+- **Organized output**:
+  - Transcripts → `output/` directory
+  - Temp files → `temp/` directory (auto-cleaned)
+- **High quality**: Uses GPT-4o-mini-transcribe by default (OpenAI's recommended model)
+- **Cost options**: Can use `-m gpt-4o-transcribe` for the full-size model at 2x cost
+
+## Basic Usage
+
+### Single Video
+```bash
+cd $YOUTUBE_TRANSCRIBER_DIR
+uv run python transcribe.py "https://youtube.com/watch?v=VIDEO_ID"
+```
+
+**Output**: `output/Video_Title_2025-11-10.txt`
+
+### Multiple Videos in Parallel
+```bash
+cd $YOUTUBE_TRANSCRIBER_DIR
+
+# Launch all in background simultaneously
+uv run python transcribe.py "URL1" &
+uv run python transcribe.py "URL2" &
+uv run python transcribe.py "URL3" &
+wait
+```
+
+**Why parallel works**: Each transcription uses unique temp files (UUID-based) in `temp/` directory.
+
+### Higher Quality Mode
+```bash
+cd $YOUTUBE_TRANSCRIBER_DIR
+uv run python transcribe.py "URL" -m gpt-4o-transcribe
+```
+
+**When to use full model**: Noisy audio, critical accuracy requirements. Costs 2x more ($0.006/min vs $0.003/min).
+
+## Command Options
+
+```bash
+uv run python transcribe.py [URL] [OPTIONS]
+
+Options:
+  -o, --output PATH         Custom output filename (default: auto-generated in output/)
+  -m, --model MODEL         Transcription model (default: gpt-4o-mini-transcribe)
+                            Options: gpt-4o-mini-transcribe, gpt-4o-transcribe, whisper-1
+  -p, --prompt TEXT         Context prompt for better accuracy
+  --chunk-duration MINUTES  Chunk size for long videos (default: 10 minutes)
+  --keep-audio              Keep temp audio files (default: auto-delete)
+```
+
+## Workflow for User Requests
+
+### Single Video Request
+1. Change to transcriber directory
+2. Run script with URL
+3. Report output file location in `output/` directory
+
+### Multiple Video Request
+1. Change to transcriber directory
+2. Launch all transcriptions in parallel using background processes
+3. Wait for all to complete
+4. Report all output files in `output/` directory
+
+### Testing/Cost-Conscious Request
+1. Default model (`gpt-4o-mini-transcribe`) is already the cheapest GPT-4o option
+2. For even cheaper: suggest Groq's Whisper API as an alternative
+3. Quality is excellent for most YouTube content
+
+## Technical Details
+
+**How it works**:
+1. Downloads audio from YouTube (via yt-dlp)
+2. Saves to unique temp file: `temp/download_{UUID}.mp3`
+3. Splits long videos (>10 min) into chunks automatically
+4. Transcribes with OpenAI API (GPT-4o-mini-transcribe)
+5. Saves transcript: `output/Video_Title_YYYY-MM-DD.txt`
+6. Cleans up temp files automatically
+
+**Parallel safety**:
+- Each process uses UUID-based temp files
+- No file conflicts between parallel processes
+- Temp files auto-cleaned after completion
+
+**Auto-chunking**:
+- Videos >10 minutes: Split into 10-minute chunks
+- Context preserved between chunks
+- Prevents API response truncation
+
+## Requirements
+- OpenAI API key: `$OPENAI_API_KEY` environment variable
+- Python 3.10+ with uv package manager
+- FFmpeg (for audio processing)
+- yt-dlp (for YouTube downloads)
+
+**Check requirements**:
+```bash
+echo $OPENAI_API_KEY  # Should show API key
+which ffmpeg          # Should show path
+```
+
+## Cost Estimates
+
+Default model (`gpt-4o-mini-transcribe` at $0.003/min):
+
+- **5-minute video**: ~$0.015
+- **25-minute video**: ~$0.075
+- **60-minute video**: ~$0.18
+
+Full model (`gpt-4o-transcribe` at $0.006/min):
+
+- **5-minute video**: ~$0.03
+- **25-minute video**: ~$0.15
+- **60-minute video**: ~$0.36
+
+## Quick Reference
+
+```bash
+# Single video (default quality)
+uv run python transcribe.py "URL"
+
+# Single video (higher quality, 2x cost)
+uv run python transcribe.py "URL" -m gpt-4o-transcribe
+
+# Multiple videos in parallel
+for url in URL1 URL2 URL3; do
+  uv run python transcribe.py "$url" &
+done
+wait
+
+# With custom output
+uv run python transcribe.py "URL" -o custom_name.txt
+
+# With context prompt
+uv run python transcribe.py "URL" -p "Context about video content"
+```
+
+## Integration
+
+**With fabric**: Process transcripts after generation
+```bash
+cat output/Video_Title_2025-11-10.txt | fabric -p extract_wisdom
+```
+
+## Notes
+
+- Script requires being in its directory to work correctly
+- Always change to `$YOUTUBE_TRANSCRIBER_DIR` first
+- Parallel execution is safe and recommended for multiple videos
+- Default model (gpt-4o-mini-transcribe) is recommended for most content
+- Output files automatically named with video title + date
+- Temp files automatically cleaned after transcription
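The corrected cost table in the new transcribe.md follows directly from the stated per-minute rates ($0.003/min for gpt-4o-mini-transcribe, $0.006/min for gpt-4o-transcribe); a quick sanity check of those numbers:

```shell
# Recompute the new cost estimates: cost = minutes * rate
for min in 5 25 60; do
  awk -v m="$min" 'BEGIN {
    printf "%2d min: $%.3f (mini) / $%.2f (full)\n", m, m * 0.003, m * 0.006
  }'
done
#  5 min: $0.015 (mini) / $0.03 (full)
# 25 min: $0.075 (mini) / $0.15 (full)
# 60 min: $0.180 (mini) / $0.36 (full)
```

These match the ~$0.015/~$0.075/~$0.18 and ~$0.03/~$0.15/~$0.36 figures in the diff, and confirm the old March-2025 estimates ($0.10-$0.20 for 5 minutes, etc.) were several times too high.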
@@ -1,263 +0,0 @@
----
-name: transcribe
-description: Transcribe YouTube videos of any length using GPT-4o-transcribe
----
-
-# YouTube Transcriber - High-Quality Video Transcription
-
-## When to Activate This Skill
-- "Transcribe this YouTube video"
-- "Get a transcript of [URL]"
-- "Transcribe these videos" (multiple URLs)
-- User provides YouTube URL(s) needing transcription
-- "Extract text from video"
-- Any request involving YouTube video transcription
-
-## Script Location
-**Primary script**: `$YOUTUBE_TRANSCRIBER_DIR/transcribe.py`
-
-## Key Features
-- **Parallel processing**: Multiple videos can be transcribed simultaneously
-- **Unlimited length**: Auto-chunks videos >10 minutes to prevent API limits
-- **Organized output**:
-  - Transcripts → `output/` directory
-  - Temp files → `temp/` directory (auto-cleaned)
-- **High quality**: Uses GPT-4o-transcribe by default (reduced hallucinations)
-- **Cost options**: Can use `-m gpt-4o-mini-transcribe` for 50% cost savings
-
-## Basic Usage
-
-### Single Video
-```bash
-cd $YOUTUBE_TRANSCRIBER_DIR
-uv run python transcribe.py "https://youtube.com/watch?v=VIDEO_ID"
-```
-
-**Output**: `output/Video_Title_2025-11-10.txt`
-
-### Multiple Videos in Parallel
-```bash
-cd $YOUTUBE_TRANSCRIBER_DIR
-
-# Launch all in background simultaneously
-uv run python transcribe.py "URL1" &
-uv run python transcribe.py "URL2" &
-uv run python transcribe.py "URL3" &
-wait
-```
-
-**Why parallel works**: Each transcription uses unique temp files (UUID-based) in `temp/` directory.
-
-### Cost-Saving Mode
-```bash
-cd $YOUTUBE_TRANSCRIBER_DIR
-uv run python transcribe.py "URL" -m gpt-4o-mini-transcribe
-```
-
-**When to use mini**: Testing, casual content, bulk processing. Quality is the same as gpt-4o-transcribe but ~50% cheaper.
-
-## Command Options
-
-```bash
-uv run python transcribe.py [URL] [OPTIONS]
-
-Options:
-  -o, --output PATH         Custom output filename (default: auto-generated in output/)
-  -m, --model MODEL         Transcription model (default: gpt-4o-transcribe)
-                            Options: gpt-4o-transcribe, gpt-4o-mini-transcribe, whisper-1
-  -p, --prompt TEXT         Context prompt for better accuracy
-  --chunk-duration MINUTES  Chunk size for long videos (default: 10 minutes)
-  --keep-audio              Keep temp audio files (default: auto-delete)
-```
-
-## Workflow for User Requests
-
-### Single Video Request
-1. Change to transcriber directory
-2. Run script with URL
-3. Report output file location in `output/` directory
-
-### Multiple Video Request
-1. Change to transcriber directory
-2. Launch all transcriptions in parallel using background processes
-3. Wait for all to complete
-4. Report all output files in `output/` directory
-
-### Testing/Cost-Conscious Request
-1. Always use `-m gpt-4o-mini-transcribe` for testing
-2. Mention cost savings to user
-3. Quality is identical to full model
-
-## Example Responses
-
-**User**: "Transcribe this video: https://youtube.com/watch?v=abc123"
-
-**Assistant Action**:
-```bash
-cd $YOUTUBE_TRANSCRIBER_DIR
-uv run python transcribe.py "https://youtube.com/watch?v=abc123"
-```
-
-**Report**: "✅ Transcript saved to `output/Video_Title_2025-11-10.txt`"
-
----
-
-**User**: "Transcribe these 5 videos: [URL1] [URL2] [URL3] [URL4] [URL5]"
-
-**Assistant Action**: Launch all 5 in parallel:
-```bash
-cd $YOUTUBE_TRANSCRIBER_DIR
-uv run python transcribe.py "URL1" &
-uv run python transcribe.py "URL2" &
-uv run python transcribe.py "URL3" &
-uv run python transcribe.py "URL4" &
-uv run python transcribe.py "URL5" &
-wait
-```
-
-**Report**: "✅ All 5 videos transcribed successfully in parallel. Output files in `output/` directory"
-
-## Technical Details
-
-**How it works**:
-1. Downloads audio from YouTube (via yt-dlp)
-2. Saves to unique temp file: `temp/download_{UUID}.mp3`
-3. Splits long videos (>10 min) into chunks automatically
-4. Transcribes with OpenAI API (GPT-4o-transcribe)
-5. Saves transcript: `output/Video_Title_YYYY-MM-DD.txt`
-6. Cleans up temp files automatically
-
-**Parallel safety**:
-- Each process uses UUID-based temp files
-- No file conflicts between parallel processes
-- Temp files auto-cleaned after completion
-
-**Auto-chunking**:
-- Videos >10 minutes: Split into 10-minute chunks
-- Context preserved between chunks
-- Prevents API response truncation
-
-## Requirements
-- OpenAI API key: `$OPENAI_API_KEY` environment variable
-- Python 3.10+ with uv package manager
-- FFmpeg (for audio processing)
-- yt-dlp (for YouTube downloads)
-
-**Check requirements**:
-```bash
-echo $OPENAI_API_KEY  # Should show API key
-which ffmpeg          # Should show path
-```
-
-## Output Format
-
-Transcripts are saved as plain text with metadata:
-```
-================================================================================
-YouTube Video Transcript (Long Video)
-================================================================================
-
-Title: Video Title Here
-Uploader: Channel Name
-Duration: 45m 32s
-URL: https://youtube.com/watch?v=VIDEO_ID
-
-================================================================================
-
-[Full transcript text with proper punctuation...]
-```
-
-## Best Practices
-
-1. **Always use parallel for multiple videos** - It's 6x faster
-2. **Use mini model for testing** - Same quality, half the cost
-3. **Check output/ directory** - All transcripts organized there
-4. **Temp files auto-clean** - No manual cleanup needed
-5. **Add context prompts for technical content**:
-```bash
-uv run python transcribe.py "URL" \
-  -p "Technical discussion about Docker, Kubernetes, microservices"
-```
-
-## Troubleshooting
-
-**API Key Missing**:
-```bash
-export OPENAI_API_KEY="sk-proj-your-key-here"
-```
-
-**FFmpeg Not Found**:
-```bash
-sudo dnf install ffmpeg  # Fedora/Nobara
-```
-
-**Parallel Conflicts** (shouldn't happen with UUID temps):
-- Each process creates unique temp file in `temp/`
-- If issues occur, check `temp/` directory permissions
-
-## Cost Estimates (as of March 2025)
-
-- **5-minute video**: $0.10 - $0.20
-- **25-minute video**: $0.50 - $1.00
-- **60-minute video**: $1.20 - $2.40
-
-**Using mini model**: Reduce costs by ~50%
-
-## Quick Reference
-
-```bash
-# Single video (default quality)
-uv run python transcribe.py "URL"
-
-# Single video (cost-saving)
-uv run python transcribe.py "URL" -m gpt-4o-mini-transcribe
-
-# Multiple videos in parallel
-for url in URL1 URL2 URL3; do
-  uv run python transcribe.py "$url" &
-done
-wait
-
-# With custom output
-uv run python transcribe.py "URL" -o custom_name.txt
-
-# With context prompt
-uv run python transcribe.py "URL" -p "Context about video content"
-```
-
-## Directory Structure
-
-```
-$YOUTUBE_TRANSCRIBER_DIR/
-├── transcribe.py     # Main script
-├── temp/             # Temporary audio files (auto-cleaned)
-├── output/           # All transcripts saved here
-├── README.md         # Full documentation
-└── pyproject.toml    # Dependencies
-```
-
-## Integration with Other Skills
-
-**With fabric skill**: Process transcripts after generation
-```bash
-# 1. Transcribe
-uv run python transcribe.py "URL"
-
-# 2. Process with fabric
-cat output/Video_Title_2025-11-10.txt | fabric -p extract_wisdom
-```
-
-**With research skill**: Transcribe source videos for research
-```bash
-# Transcribe multiple research videos in parallel
-# Then analyze transcripts for insights
-```
-
-## Notes
-
-- Script requires being in its directory to work correctly
-- Always change to `$YOUTUBE_TRANSCRIBER_DIR` first
-- Parallel execution is safe and recommended for multiple videos
-- Use mini model for testing to save costs
-- Output files automatically named with video title + date
-- Temp files automatically cleaned after transcription
@@ -1,18 +1,10 @@
 ---
-name: generate
 description: Generate images from text prompts using local GPU inference
 allowed-tools: Bash(z-image:*)
 ---
 
 # Z-Image - Local AI Image Generation
 
-## When to Activate This Skill
-- "Generate an image of..."
-- "Create a picture of..."
-- "Make me an image"
-- "z-image [prompt]"
-- User describes something visual they want generated
-
 ## Tool
 
 **Binary:** `z-image` (in PATH via `~/bin/z-image`)
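The transcriber docs in this diff describe auto-chunking: videos over 10 minutes are split into 10-minute chunks, so the number of API calls per video is a ceiling division of duration by chunk size. A minimal sketch of that arithmetic (the `chunk_count` helper is illustrative, not part of the script; the 10-minute default matches `--chunk-duration`):

```shell
# Chunks needed for a video: ceiling of duration / chunk size
chunk_count() {
  local duration_min=$1 chunk_min=${2:-10}
  echo $(( (duration_min + chunk_min - 1) / chunk_min ))
}

chunk_count 8    # under 10 min, single chunk → 1
chunk_count 45   # → 5
chunk_count 60   # → 6
```

This is why a 60-minute video costs six chunked requests' worth of audio minutes but the same total per-minute price as one long request.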