---
name: generate
description: Generate images from text prompts using local GPU inference
allowed-tools: Bash(z-image:*)
---

# Z-Image - Local AI Image Generation

## When to Activate This Skill

- "Generate an image of..."
- "Create a picture of..."
- "Make me an image"
- "z-image [prompt]"
- User describes something visual they want generated

## Tool

**Binary:** `z-image` (on PATH via `~/bin/z-image`)
**Script:** `~/.claude/skills/z-image/generate.py`
**Model:** Tongyi-MAI/Z-Image-Turbo (diffusers, bfloat16, CUDA)
**venv:** `~/.claude/skills/z-image/.venv/`

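The script itself isn't reproduced here, but a diffusers-based generator matching the model and defaults above might look roughly like this (a sketch only — the `generate` function, its signature, and the offload handling are illustrative, not `generate.py`'s actual code):

```python
import torch
from diffusers import DiffusionPipeline

def generate(prompt: str, out_path: str = "out.png",
             steps: int = 9, offload: bool = True) -> None:
    # Load the turbo model in bfloat16; the first call downloads the weights.
    pipe = DiffusionPipeline.from_pretrained(
        "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
    )
    if offload:
        # Stream weights between CPU and GPU to cut peak VRAM usage.
        pipe.enable_model_cpu_offload()
    else:
        pipe.to("cuda")
    # Turbo models are distilled to run without classifier-free guidance,
    # hence guidance_scale=0.0 and a small step count.
    image = pipe(prompt, num_inference_steps=steps,
                 guidance_scale=0.0).images[0]
    image.save(out_path)
```
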
## Usage

```bash
# Basic generation
z-image "a cat sitting on a cloud"

# Custom output filename
z-image "sunset over mountains" -o sunset.png

# Custom output directory
z-image "forest path" -d ~/Pictures/ai-generated/

# More inference steps (higher quality, slower)
z-image "detailed portrait" -s 20

# Disable CPU offloading (faster if VRAM allows)
z-image "quick sketch" --no-offload
```

## Defaults

- **Steps:** 9 (fast turbo mode)
- **Guidance scale:** 0.0 (turbo model doesn't need guidance)
- **Output:** `zimage_TIMESTAMP_PROMPT.png` in current directory
- **VRAM:** Uses CPU offloading by default to reduce VRAM usage

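The auto-naming pattern above can be illustrated with a small helper (hypothetical — the real script's timestamp format and prompt-slug rules may differ):

```python
import re
from datetime import datetime

def default_filename(prompt: str, now: datetime) -> str:
    # Slugify the prompt: lowercase, non-alphanumerics to underscores, capped length.
    slug = re.sub(r"[^a-z0-9]+", "_", prompt.lower()).strip("_")[:40]
    # Matches the zimage_TIMESTAMP_PROMPT.png pattern described above.
    return f"zimage_{now:%Y%m%d_%H%M%S}_{slug}.png"
```

For example, `default_filename("A cat on a cloud!", datetime(2024, 1, 2, 3, 4, 5))` yields `zimage_20240102_030405_a_cat_on_a_cloud.png`.
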
## Notes

- First run downloads the model (several GB)
- Requires an NVIDIA GPU with CUDA support
- Output is always PNG format
- After generating, use the Read tool to show the image to the user