---
description: Generate images from text prompts using local GPU inference
allowed-tools: Bash(z-image:*)
---

# Z-Image - Local AI Image Generation

## Tool

**Binary:** `z-image` (in PATH via `~/bin/z-image`)
**Script:** `~/.claude/skills/z-image/generate.py`
**Model:** Tongyi-MAI/Z-Image-Turbo (diffusers, bfloat16, CUDA)
**venv:** `~/.claude/skills/z-image/.venv/`

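A quick way to confirm this layout is in place (the paths are the ones listed above; the status messages are illustrative):

```shell
# Sanity check for the assumed layout; prints one status line per check.
command -v z-image >/dev/null 2>&1 \
  && echo "z-image found on PATH" \
  || echo "z-image not on PATH"
[ -d "$HOME/.claude/skills/z-image/.venv" ] \
  && echo "venv present" \
  || echo "venv missing"
```
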
## Usage

```bash
# Basic generation
z-image "a cat sitting on a cloud"

# Custom output filename
z-image "sunset over mountains" -o sunset.png

# Custom output directory
z-image "forest path" -d ~/Pictures/ai-generated/

# More inference steps (higher quality, slower)
z-image "detailed portrait" -s 20

# Disable CPU offloading (faster if VRAM allows)
z-image "quick sketch" --no-offload
```

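The flags above compose naturally in a loop. A sketch of batch generation over several prompts, using only the documented `-o` flag (the `batch_NN.png` naming is illustrative); it degrades gracefully when `z-image` is not installed:

```shell
# Generate one image per prompt: batch_00.png, batch_01.png, ...
i=0
while IFS= read -r prompt; do
  out="batch_$(printf '%02d' "$i").png"
  if command -v z-image >/dev/null 2>&1; then
    z-image "$prompt" -o "$out"
  else
    echo "z-image not on PATH; would write $out"
  fi
  i=$((i + 1))
done <<'EOF'
a cat sitting on a cloud
sunset over mountains
forest path
EOF
```
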
## Defaults

- **Steps:** 9 (fast turbo mode)
- **Guidance scale:** 0.0 (turbo model doesn't need guidance)
- **Output:** `zimage_TIMESTAMP_PROMPT.png` in current directory
- **VRAM:** Uses CPU offloading by default to reduce VRAM usage

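The default filename reads as a timestamp plus a slug of the prompt. A rough sketch of how such a name could be built (an assumption for illustration; the actual logic lives in `generate.py` and may differ):

```shell
prompt="a cat sitting on a cloud"
ts=$(date +%Y%m%d_%H%M%S)
# Slug: spaces to underscores, drop non-alphanumerics, cap the length.
slug=$(printf '%s' "$prompt" | tr ' ' '_' | tr -cd 'A-Za-z0-9_' | cut -c1-40)
echo "zimage_${ts}_${slug}.png"
```
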
## Notes

- First run downloads the model (~several GB)
- Requires NVIDIA GPU with CUDA support
- Output is always PNG format
- After generating, use the Read tool to show the image to the user