Sync config: add save-doc skill, update agents/skills/plugins, clean sessions

- Add skills/save-doc/ skill
- Add sessions/2121928.json
- Delete cognitive-memory skill, memory-saver agent, save-memories command
- Update CLAUDE.md, pr-reviewer, issue-worker agents
- Update mcp-manager, create-scheduled-task, paper-dynasty skills
- Update plugins (blocklist, installed, known_marketplaces, marketplaces)
- Remove old session files

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cal Corum 2026-03-18 02:00:54 -05:00
parent f9cbbd16a9
commit b464a10964
24 changed files with 251 additions and 1139 deletions


@@ -9,14 +9,6 @@ Automatic loads are NOT enough — Read loads required CLAUDE.md context along t
- Launch sub-agents with Sonnet model unless another model is specified by the user
- When you need a response from Cal (asking a question, need approval, blocked on input), send a voice notification: `curl -s -X POST http://localhost:8888/notify -H 'Content-Type: application/json' -d '{"message": "your brief message here"}'` — this plays TTS audio through the workstation speakers via the local Piper voice server
## Memory Protocol (Cognitive Memory)
- App: `/mnt/NV2/Development/cognitive-memory/` | Data: `~/.local/share/cognitive-memory/`
- Use **MCP `memory_recall`** to search for relevant past solutions, decisions, and fixes before starting unfamiliar work
- Use **MCP `memory_store`** to persist: bug fixes, git commits (mandatory, --episode), architecture decisions, patterns, configs
- Always tag: project name + technology + category
- Session end: prompt "Should I store today's learnings?"
- `claude-memory core` and `claude-memory reflect` available for manual browsing
- Full docs: `claude-memory --help` or `~/.claude/skills/cognitive-memory/SKILL.md` (skill layer) / `/mnt/NV2/Development/cognitive-memory/` (app code)
## Git Commits
- NEVER commit/add/push/tag without explicit user approval ("commit this", "go ahead")


@@ -1,7 +1,7 @@
---
name: issue-worker
description: Autonomous agent that fixes a single Gitea issue, creates a PR, and reports back. Used by the issue-dispatcher scheduled task.
tools: Bash, Glob, Grep, Read, Edit, Write, mcp__gitea-mcp__get_issue_by_index, mcp__gitea-mcp__edit_issue, mcp__gitea-mcp__create_pull_request, mcp__gitea-mcp__create_issue_comment, mcp__gitea-mcp__add_issue_labels, mcp__gitea-mcp__remove_issue_label, mcp__gitea-mcp__get_file_content, mcp__cognitive-memory__memory_recall, mcp__cognitive-memory__memory_store, mcp__cognitive-memory__memory_search, mcp__cognitive-memory__memory_relate
tools: Bash, Glob, Grep, Read, Edit, Write, mcp__gitea-mcp__get_issue_by_index, mcp__gitea-mcp__edit_issue, mcp__gitea-mcp__create_pull_request, mcp__gitea-mcp__create_issue_comment, mcp__gitea-mcp__add_issue_labels, mcp__gitea-mcp__remove_issue_label, mcp__gitea-mcp__get_file_content
model: sonnet
permissionMode: bypassPermissions
---
@@ -14,7 +14,7 @@ You are an autonomous agent that fixes a single Gitea issue and opens a PR for h
### Phase 1: Understand
1. **Read the issue.** Parse the issue details from your prompt. If needed, use `mcp__gitea-mcp__get_issue_by_index` for full context. Use `mcp__cognitive-memory__memory_recall` to check for related past work or decisions.
1. **Read the issue.** Parse the issue details from your prompt. If needed, use `mcp__gitea-mcp__get_issue_by_index` for full context.
2. **Read the project's CLAUDE.md.** Before touching any code, read `CLAUDE.md` at the repo root (and any nested CLAUDE.md files it references). These contain critical conventions, test commands, and coding standards you must follow.
@@ -88,17 +88,6 @@ You are an autonomous agent that fixes a single Gitea issue and opens a PR for h
- Link to the PR
- Brief summary of the fix approach
### Phase 4: Remember
15. **Store a memory** of the fix using `mcp__cognitive-memory__memory_store`:
- `type`: "fix" (or "solution" / "code_pattern" if more appropriate)
- `title`: concise and searchable (e.g., "Fix: decay filter bypass in semantic_recall")
- `content`: markdown with problem, root cause, solution, and files changed
- `tags`: include project name, language, and relevant technology tags
- `importance`: 0.5-0.7 for standard fixes, 0.8+ for cross-project patterns
- `episode`: true
16. **Connect the memory.** Search for related existing memories with `mcp__cognitive-memory__memory_search` using the project name and relevant tags, then create edges with `mcp__cognitive-memory__memory_relate` to link your new memory to related ones. Every stored memory should have at least one edge.
## Output Format


@@ -1,84 +0,0 @@
---
name: memory-saver
description: Stores session learnings as cognitive memories. Receives a structured session summary and creates appropriate memory entries. Run in background after significant work sessions.
model: sonnet
permissions:
allow:
- "Bash(claude-memory:*)"
- "mcp__cognitive-memory__*"
---
# Memory Saver Agent
You receive a structured summary of work done in a Claude Code session. Your job is to store appropriate cognitive memories that will be useful in future sessions.
## Instructions
### Phase 1: Store Memories
1. Read the session summary provided in your prompt
2. Identify distinct, storable items — each should be ONE of:
- **solution** — a problem that was solved and how
- **decision** — an architectural or design choice with rationale
- **fix** — a bug fix or correction
- **configuration** — a config that worked
- **code_pattern** — a reusable pattern discovered
- **workflow** — a process or sequence
- **procedure** — a multi-step workflow with preconditions
- **insight** — a cross-cutting observation
3. Store each item using the MCP tools (preferred) or CLI fallback
4. Always include: project tag, technology tags, category tag
5. Set importance: 0.8-1.0 critical/multi-project, 0.5-0.7 standard, 0.3-0.4 minor
6. Track the memory IDs returned from each store call — you need them for Phase 2
### Phase 2: Create Edges (REQUIRED)
After all memories are stored, you MUST connect them to the graph. Orphan nodes with zero connections are far less useful during traversal.
1. **Search for related existing memories** using `memory_search` with key tags from the new memories (project name, technology, related concepts). Search 2-3 different tag combinations to find good connection points.
2. **Create edges** using `memory_relate` between:
- **New-to-new**: memories stored in this session that reference each other (e.g., a decision that motivates a fix, a config that implements a decision)
- **New-to-existing**: new memories and existing memories that share the same project, feature, or system
- **Temporal chains**: if the summary mentions prior work, link new memories to it with `FOLLOWS`
3. **Edge type guidance:**
- `CAUSES` — one thing motivated or led to another
- `FOLLOWS` — temporal/sequential relationship (prior work -> new work)
- `RELATED_TO` — same system, feature, or concept
- `DEPENDS_ON` — one requires the other to function
- `CONTRADICTS` — supersedes or corrects a previous memory
4. **Minimum expectation:** every memory stored in this session should have at least one edge. If you stored 5 memories, you should create at least 5 edges (typically more).
## What to store
- Bug fixes with root cause and solution
- Architecture/design decisions with rationale
- New configurations that worked
- Performance improvements with before/after numbers
- Patterns that could apply to other projects
- Deployment or infrastructure changes
## What NOT to store
- Routine file edits without insight
- Session metadata (message counts, tool counts)
- Anything already stored during the session (check the summary for this)
- Speculative or incomplete work
## Storage format
Use `mcp__cognitive-memory__memory_store` with:
- `type`: one of the types listed above
- `title`: concise, descriptive, searchable (e.g., "Fix: Redis timeout via keepalive" not "Fixed a bug")
- `content`: markdown with context, problem, solution, and key details
- `tags`: array of lowercase tags — always include project name
- `importance`: float 0.0-1.0
- `episode`: true (logs to daily episode file)
## Rules
- Create separate memories for distinct topics — don't lump unrelated work into one
- Titles should be grep-friendly — someone searching for the topic should find it
- Content should be self-contained — readable without the original session context
- If the summary mentions memories were already stored during the session, don't duplicate them
- Be thorough but not excessive — 1-6 memories per session is typical
- **Never skip Phase 2** — edges are what make the memory graph useful. A memory without edges is nearly invisible to future traversals.


@@ -1,7 +1,7 @@
---
name: pr-reviewer
description: Reviews a Gitea pull request for correctness, conventions, and security. Posts a formal review via Gitea API.
tools: Bash, Glob, Grep, Read, mcp__gitea-mcp__get_pull_request_by_index, mcp__gitea-mcp__get_pull_request_diff, mcp__gitea-mcp__create_pull_request_review, mcp__gitea-mcp__add_issue_labels, mcp__gitea-mcp__remove_issue_label, mcp__gitea-mcp__create_repo_label, mcp__gitea-mcp__list_repo_labels, mcp__cognitive-memory__memory_recall, mcp__cognitive-memory__memory_store, mcp__cognitive-memory__memory_search, mcp__cognitive-memory__memory_relate
tools: Bash, Glob, Grep, Read, mcp__gitea-mcp__get_pull_request_by_index, mcp__gitea-mcp__get_pull_request_diff, mcp__gitea-mcp__create_pull_request_review, mcp__gitea-mcp__add_issue_labels, mcp__gitea-mcp__remove_issue_label, mcp__gitea-mcp__create_repo_label, mcp__gitea-mcp__list_repo_labels
disallowedTools: Edit, Write
model: sonnet
permissionMode: bypassPermissions
@@ -21,12 +21,7 @@ You are an automated PR reviewer. You review Gitea pull requests for correctness
3. **Read project conventions.** Read `CLAUDE.md` at the repo root (and any nested CLAUDE.md files it references). These contain coding standards and conventions you must evaluate against.
4. **Check cognitive memory.** Use `mcp__cognitive-memory__memory_recall` to search for:
- Past decisions and patterns for this repo
- Related fixes or known issues in the changed areas
- Architecture decisions that affect the changes
5. **Read changed files in full.** For each file in the diff, read the complete file (not just the diff hunks) to understand the full context of the changes.
4. **Read changed files in full.** For each file in the diff, read the complete file (not just the diff hunks) to understand the full context of the changes.
### Phase 2: Review
@@ -76,17 +71,6 @@ Evaluate the PR against this checklist:
- `event`: your verdict (APPROVED, REQUEST_CHANGES, or COMMENT)
- `body`: your formatted review (see Review Format below)
### Phase 4: Remember
8. **Store a memory** of the review using `mcp__cognitive-memory__memory_store`:
- `type`: "workflow"
- `title`: concise summary (e.g., "PR review: cognitive-memory#15 — decay filter fix")
- `content`: verdict, key findings, files reviewed
- `tags`: include `pr-reviewer`, project name, and relevant technology tags
- `importance`: 0.4 for clean approvals, 0.6 for reviews with substantive feedback
- `episode`: true
9. **Connect the memory.** Search for related memories and create edges with `mcp__cognitive-memory__memory_relate`.
## Review Format


@@ -1,40 +0,0 @@
---
allowed-tools: Task
description: Save session learnings to cognitive memory
---
**IMPORTANT: Do NOT narrate your steps. Do all analysis silently. Your only visible output should be ONE of:**
- "Nothing new worth storing since the last save."
- "Saving N memories in the background." (followed by launching the agent)
## Process (do this silently)
1. **Find cutoff**: Scan for the most recent `memory_store` MCP call or `claude-memory store` Bash call. Only analyze conversation AFTER that point. If none found, analyze everything.
2. **Analyze**: Identify storable items after the cutoff — solutions, decisions, fixes, configs, patterns, insights. Include project names, technical details, rationale, before/after data.
3. **Gate**: If nothing after the cutoff is worth a memory (routine chat, minor reads, trivial refinements), say "Nothing new worth storing since the last save." and stop.
4. **Build summary**: Create a structured prompt for the agent:
```
PROJECT: <name(s)>
ITEMS:
1. [type] Title / Tags / Importance / Content
2. ...
```
5. **Launch agent**: Spawn in background with sonnet model:
```
Task(subagent_type="memory-saver", model="sonnet", run_in_background=true,
description="Store session memories", prompt="<summary>")
```
6. **Confirm**: Say "Saving N memories in the background."
## Guidelines
- Be thorough — capture everything worth remembering
- Don't duplicate memories already stored during the session
- Each item should be self-contained and useful on its own
- 1-6 items per session is typical; more is fine for large sessions
- Prefer specific, searchable titles over vague ones


@@ -1,5 +1,5 @@
{
"fetchedAt": "2026-03-17T06:30:47.668Z",
"fetchedAt": "2026-03-18T06:15:49.939Z",
"plugins": [
{
"plugin": "code-review@claude-plugins-official",


@@ -23,10 +23,10 @@
"playground@claude-plugins-official": [
{
"scope": "user",
"installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/playground/78497c524da3",
"version": "78497c524da3",
"installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/playground/6b70f99f769f",
"version": "6b70f99f769f",
"installedAt": "2026-02-18T19:51:28.422Z",
"lastUpdated": "2026-03-17T02:00:48.526Z",
"lastUpdated": "2026-03-17T22:45:51.123Z",
"gitCommitSha": "261ce4fba4f2c314c490302158909a32e5889c88"
}
],
@@ -43,10 +43,10 @@
"frontend-design@claude-plugins-official": [
{
"scope": "user",
"installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/frontend-design/78497c524da3",
"version": "78497c524da3",
"installPath": "/home/cal/.claude/plugins/cache/claude-plugins-official/frontend-design/6b70f99f769f",
"version": "6b70f99f769f",
"installedAt": "2026-02-22T05:53:45.091Z",
"lastUpdated": "2026-03-17T02:00:48.520Z",
"lastUpdated": "2026-03-17T22:45:51.117Z",
"gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
}
],


@@ -5,7 +5,7 @@
"url": "https://github.com/anthropics/claude-plugins-official.git"
},
"installLocation": "/home/cal/.claude/plugins/marketplaces/claude-plugins-official",
"lastUpdated": "2026-03-12T05:01:29.190Z"
"lastUpdated": "2026-03-17T14:00:49.109Z"
},
"claude-code-plugins": {
"source": {
@@ -13,6 +13,6 @@
"repo": "anthropics/claude-code"
},
"installLocation": "/home/cal/.claude/plugins/marketplaces/claude-code-plugins",
"lastUpdated": "2026-03-17T07:00:48.480Z"
"lastUpdated": "2026-03-18T07:00:48.444Z"
}
}

@@ -1 +1 @@
Subproject commit 079dc856c6c990de5be28e288939293905c154c1
Subproject commit a3d9426e3e183d1fdc560fcc8a69e9d854f040c9

@@ -1 +1 @@
Subproject commit 78497c524da3762865d47377357c30af5b50d522
Subproject commit 6b70f99f769f58b9b642dc8a271936b17862cf8c


@@ -1 +0,0 @@
{"pid":1737831,"sessionId":"28ed0c3c-06cc-4ec6-9f35-6f36b99ff751","cwd":"/mnt/NV2/Development/paper-dynasty/discord-app","startedAt":1773680541099}


@@ -1 +0,0 @@
{"pid":1801940,"sessionId":"3a51ec59-c0b2-4ab6-a227-d634635dec54","cwd":"/mnt/NV2/Development/paper-dynasty/database","startedAt":1773690591005}

sessions/2121928.json Normal file

@@ -0,0 +1 @@
{"pid":2121928,"sessionId":"03413088-aceb-4863-a6d8-358355abf06d","cwd":"/mnt/NV2/Development/claude-home","startedAt":1773803933365}


@@ -1,421 +0,0 @@
# Cognitive Memory Schema
## Memory File Format
Each memory is a markdown file with YAML frontmatter. Filename format: `{slugified-title}-{6char-uuid}.md`
Example: `graph/solutions/fixed-redis-connection-timeouts-a1b2c3.md`
```markdown
---
id: a1b2c3d4-e5f6-7890-abcd-ef1234567890
type: solution
title: "Fixed Redis connection timeouts"
tags: [redis, timeout, production, homelab]
importance: 0.8
confidence: 0.8
created: 2025-12-06T16:25:58+00:00
updated: 2025-12-06T16:25:58+00:00
relations:
- target: def789ab-...
type: SOLVES
direction: outgoing
strength: 0.8
context: "Keepalive prevents idle disconnections"
---
Added socket_keepalive=True and socket_timeout=300 to Redis connection configuration.
```
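As a rough illustration, the filename convention above could be produced like this (a minimal sketch; `memory_filename` and its exact slug rules are illustrative, not taken from the client code):
```python
import re
import uuid

def memory_filename(title, mem_id=None):
    """Build '{slugified-title}-{6char-uuid}.md' (sketch; slug rules assumed)."""
    # Lowercase, collapse non-alphanumeric runs to single hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    mem_id = mem_id or str(uuid.uuid4())
    return f"{slug}-{mem_id[:6]}.md"

print(memory_filename("Fixed Redis connection timeouts",
                      "a1b2c3d4-e5f6-7890-abcd-ef1234567890"))
```
This reproduces the example filename above when given the same title and id.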
### Frontmatter Fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `id` | string (UUID) | Yes | Unique identifier |
| `type` | string | Yes | Memory type (see below) |
| `title` | string (quoted) | Yes | Descriptive title |
| `tags` | list[string] | No | Categorization tags (inline YAML list) |
| `importance` | float 0.0-1.0 | Yes | Importance score |
| `confidence` | float 0.0-1.0 | No | Confidence score (default 0.8) |
| `steps` | list[string] | No | Ordered steps (procedure type only) |
| `preconditions` | list[string] | No | Required preconditions (procedure type only) |
| `postconditions` | list[string] | No | Expected postconditions (procedure type only) |
| `created` | string (ISO 8601) | Yes | Creation timestamp |
| `updated` | string (ISO 8601) | Yes | Last update timestamp |
| `relations` | list[Relation] | No | Relationships to other memories |
### Memory Types
| Type | Directory | Description |
|------|-----------|-------------|
| `solution` | `graph/solutions/` | Fix or resolution to a problem |
| `fix` | `graph/fixes/` | Code-level fix or patch |
| `decision` | `graph/decisions/` | Architecture or design choice |
| `configuration` | `graph/configurations/` | Working configuration |
| `problem` | `graph/problems/` | Issue or challenge |
| `workflow` | `graph/workflows/` | Process or sequence |
| `code_pattern` | `graph/code-patterns/` | Reusable code pattern |
| `error` | `graph/errors/` | Specific error condition |
| `general` | `graph/general/` | General learning |
| `procedure` | `graph/procedures/` | Structured workflow with steps/preconditions/postconditions |
| `insight` | `graph/insights/` | Cross-cutting pattern from reflection cycles |
### Relation Fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `target` | string (UUID) | Yes | Target memory ID |
| `type` | string | Yes | Relationship type |
| `direction` | string | Yes | `outgoing` or `incoming` |
| `strength` | float 0.0-1.0 | No | Relationship strength |
| `context` | string (quoted) | No | Context description |
| `edge_id` | string (UUID) | No | Link to edge file for rich description |
### Relationship Types
`SOLVES`, `CAUSES`, `BUILDS_ON`, `ALTERNATIVE_TO`, `REQUIRES`, `FOLLOWS`, `RELATED_TO`
### Edge File Format
Edge files live in `graph/edges/` with full descriptions. Filename: `{from-slug}--{TYPE}--{to-slug}-{6char}.md`
```markdown
---
id: <uuid>
type: SOLVES
from_id: <uuid>
from_title: "Fixed Redis connection timeouts"
to_id: <uuid>
to_title: "Redis connection drops under load"
strength: 0.8
created: <iso>
updated: <iso>
---
The keepalive fix directly resolves the idle disconnection problem because...
```
Edge frontmatter fields: id, type, from_id, from_title, to_id, to_title, strength, created, updated.
---
## _index.json
Computed index for fast lookups. Rebuilt by `reindex` command. **Source of truth: markdown files.**
```json
{
"version": 2,
"updated": "2025-12-13T10:30:00+00:00",
"count": 313,
"edges": {
"<edge-uuid>": {
"type": "SOLVES",
"from_id": "<uuid>",
"to_id": "<uuid>",
"strength": 0.8,
"path": "graph/edges/fixed-redis--SOLVES--redis-drops-a1b2c3.md"
}
},
"entries": {
"a1b2c3d4-e5f6-7890-abcd-ef1234567890": {
"title": "Fixed Redis connection timeouts",
"type": "solution",
"tags": ["redis", "timeout", "production", "homelab"],
"importance": 0.8,
"confidence": 0.8,
"created": "2025-12-06T16:25:58+00:00",
"updated": "2025-12-06T16:25:58+00:00",
"path": "graph/solutions/fixed-redis-connection-timeouts-a1b2c3.md",
"relations": [
{
"target": "def789ab-...",
"type": "SOLVES",
"direction": "outgoing",
"strength": 0.8,
"context": "Keepalive prevents idle disconnections"
}
]
}
}
}
```
**Notes:**
- `_index.json` is gitignored (derived data)
- Can be regenerated at any time with `python client.py reindex`
- Entries mirror frontmatter fields for fast search without opening files
---
## _state.json
Mutable runtime state tracking access patterns and decay scores. **Kept separate from frontmatter to avoid churning git history with access-count updates.**
```json
{
"version": 1,
"updated": "2025-12-13T10:30:00+00:00",
"entries": {
"a1b2c3d4-e5f6-7890-abcd-ef1234567890": {
"access_count": 5,
"last_accessed": "2025-12-13T10:15:00+00:00",
"decay_score": 0.75
}
}
}
```
**Notes:**
- `_state.json` is gitignored (mutable, session-specific data)
- Access count increments on `get` operations
- Decay scores recalculated by `python client.py decay`
---
## Episode File Format
Daily markdown files in `episodes/` directory. Append-only, one file per day.
Filename format: `YYYY-MM-DD.md`
```markdown
# 2025-12-13
## 10:30 - Fixed Discord bot reconnection
- **Type:** fix
- **Tags:** major-domo, discord, python
- **Memory:** [discord-bot-reconnection](../graph/fixes/discord-bot-reconnection-a1b2c3.md)
- **Summary:** Implemented exponential backoff for reconnections
## 11:15 - Chose websocket over polling
- **Type:** decision
- **Tags:** major-domo, architecture
- **Memory:** [websocket-over-polling](../graph/decisions/websocket-over-polling-d4e5f6.md)
- **Summary:** WebSocket provides lower latency for real-time game state updates
```
**Notes:**
- Episodes are chronological session logs
- Entries link back to graph memories when available
- Append-only (never edited, only appended)
- Useful for reflection and session review
---
## CORE.md Format
Auto-generated summary of highest-relevance memories. Loaded into system prompt at session start.
```markdown
# Memory Core (auto-generated)
> Last updated: 2025-12-13 | Active memories: 180/313 | Next refresh: daily (systemd timer)
## Critical Solutions
- [title](relative/path.md) (tag1, tag2)
## Active Decisions
- [title](relative/path.md) (tag1, tag2)
## Key Fixes
- [title](relative/path.md) (tag1, tag2)
## Configurations
- [title](relative/path.md) (tag1, tag2)
## Patterns & Workflows
- [title](relative/path.md) (tag1, tag2)
```
**Notes:**
- Budget: ~3K tokens (~12,000 chars)
- Regenerated by `python client.py core`
- Memories included based on decay_score (must be >= 0.2)
- Grouped by type category, sorted by decay score within each group
- Capped at 15 entries per section
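The curation rules above can be sketched as follows (illustrative helper; assumes each entry carries its type and current decay score):
```python
def build_core_sections(entries, min_score=0.2, cap=15):
    """Filter by decay score, group by type, rank within each group (sketch)."""
    sections = {}
    for e in entries:
        if e["decay_score"] >= min_score:  # must be >= 0.2 to be included
            sections.setdefault(e["type"], []).append(e)
    # Sort by decay score within each group, cap at 15 entries per section
    return {
        t: sorted(group, key=lambda m: m["decay_score"], reverse=True)[:cap]
        for t, group in sections.items()
    }

entries = [
    {"title": "A", "type": "solution", "decay_score": 0.9},
    {"title": "B", "type": "solution", "decay_score": 0.1},  # below threshold
    {"title": "C", "type": "decision", "decay_score": 0.5},
]
core = build_core_sections(entries)
print([m["title"] for m in core["solution"]])
```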
---
## Decay Model
```
decay_score = importance × e^(-λ × days_since_access) × log2(access_count + 1) × type_weight
```
Where:
- `λ = 0.03` (half-life ~23 days)
- `days_since_access` = days since `_state.json` `last_accessed`
- `access_count` = from `_state.json`
- `type_weight` = per-type multiplier (see below)
- For `access_count == 0`, `usage_factor = 0.5` (new memories start at half strength)
### Type Weights
| Type | Weight |
|------|--------|
| `procedure` | 1.4 |
| `decision` | 1.3 |
| `insight` | 1.25 |
| `solution` | 1.2 |
| `code_pattern` | 1.1 |
| `configuration` | 1.1 |
| `fix` | 1.0 |
| `workflow` | 1.0 |
| `problem` | 0.9 |
| `error` | 0.8 |
| `general` | 0.8 |
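Combining the formula and the weights above, a minimal Python sketch (function and argument names are illustrative, not from the client code):
```python
import math

TYPE_WEIGHTS = {
    "procedure": 1.4, "decision": 1.3, "insight": 1.25, "solution": 1.2,
    "code_pattern": 1.1, "configuration": 1.1, "fix": 1.0, "workflow": 1.0,
    "problem": 0.9, "error": 0.8, "general": 0.8,
}

def decay_score(importance, days_since_access, access_count, mem_type, lam=0.03):
    # Recency factor: exponential decay, half-life ~23 days at lam = 0.03
    recency = math.exp(-lam * days_since_access)
    # Usage factor: never-accessed memories start at half strength
    usage = 0.5 if access_count == 0 else math.log2(access_count + 1)
    return importance * recency * usage * TYPE_WEIGHTS.get(mem_type, 0.8)

# A solution with importance 0.8, 5 accesses, last accessed 10 days ago:
print(round(decay_score(0.8, 10, 5, "solution"), 3))
```
Note that the usage factor can exceed 1 for frequently accessed memories, so scores above 1.0 are possible.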
### Thresholds
| Range | Status |
|-------|--------|
| 0.5+ | Active |
| 0.2-0.5 | Fading |
| 0.05-0.2 | Dormant |
| <0.05 | Archived |
### Vault
Pinned memories (in `vault/`) have a decay score of 999.0 (effectively infinite).
---
## REFLECTION.md Format
Auto-generated summary of memory themes, cross-project patterns, and access statistics. Generated by `python client.py reflection` or automatically during `reflect`.
```markdown
# Reflection Summary (auto-generated)
> Last updated: 2026-02-13 | Last reflection: 2026-02-13 | Total reflections: 2
## Themes
Top tag co-occurrences revealing recurring themes:
- **fix + python**: 52 memories ("Fix S3 upload...", "fix_cardpositions.py...")
## Cross-Project Patterns
Tags that span multiple projects:
- **fix**: appears in major-domo (38), vagabond-rpg (22), paper-dynasty (19)
## Most Accessed
Top 10 memories by access count:
1. [Title](graph/type/filename.md) - N accesses
## Recent Insights
Insight-type memories:
- [Title](graph/insights/filename.md) - content preview...
## Consolidation History
- Total merges performed: 0
```
**Sections:**
1. **Themes** - Top 8 tag co-occurrences with example memory titles (top 3)
2. **Cross-Project Patterns** - Tags spanning 2+ known projects with per-project counts
3. **Most Accessed** - Top 10 memories ranked by `_state.json` access_count
4. **Recent Insights** - Latest insight-type memories with 80-char content preview
5. **Consolidation History** - Merge count from episode files
**Known projects:** major-domo, paper-dynasty, homelab, vagabond-rpg, foundryvtt, strat-gameplay-webapp
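The theme analysis (tag co-occurrence counting) can be sketched as (illustrative helper name):
```python
from collections import Counter
from itertools import combinations

def tag_cooccurrences(memories, top_n=8):
    """Count how often each tag pair appears together on a memory (sketch)."""
    pairs = Counter()
    for m in memories:
        # Sorted, deduplicated tags so (a, b) and (b, a) count as one pair
        for a, b in combinations(sorted(set(m["tags"])), 2):
            pairs[(a, b)] += 1
    return pairs.most_common(top_n)

mems = [{"tags": ["fix", "python"]}, {"tags": ["fix", "python", "redis"]}]
print(tag_cooccurrences(mems)[0])
```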
---
## _embeddings.json
Ollama embedding vectors for semantic search. **Gitignored** (derived, regenerated by `python client.py embed`).
```json
{
"model": "nomic-embed-text",
"updated": "2026-02-13T12:00:00+00:00",
"entries": {
"a1b2c3d4-e5f6-7890-abcd-ef1234567890": [0.0123, -0.0456, ...],
...
}
}
```
**Notes:**
- Vectors generated from `"{title}. {content_preview}"` per memory
- Uses Ollama `nomic-embed-text` model at `http://localhost:11434`
- Batched in groups of 50, 300-second timeout for first-time model pull
- Falls back gracefully if Ollama is unavailable
- Must be refreshed after adding new memories (`python client.py embed`)
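The per-memory embedding text and batch grouping can be sketched as follows (a sketch only; the request body shape assumes Ollama's `/api/embeddings` endpoint, and no HTTP call is made here):
```python
def build_embed_requests(memories, model="nomic-embed-text", batch_size=50):
    """Build one embedding request body per memory, grouped into batches.

    The prompt follows the "{title}. {content_preview}" convention above.
    """
    batches = []
    for i in range(0, len(memories), batch_size):
        batches.append([
            {"model": model, "prompt": f"{m['title']}. {m['content_preview']}"}
            for m in memories[i:i + batch_size]
        ])
    return batches

reqs = build_embed_requests(
    [{"title": "Fixed Redis connection timeouts",
      "content_preview": "Added socket_keepalive=True"}])
print(reqs[0][0]["prompt"])
```
Each batch would then be POSTed to `http://localhost:11434`, falling back gracefully if the server is unreachable.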
---
## Procedure Memory Format
Procedure-type memories have additional frontmatter fields for structured steps:
```markdown
---
id: a1b2c3d4-...
type: procedure
title: "Deploy Major Domo to production"
tags: [major-domo, deploy, docker]
importance: 0.8
confidence: 0.8
steps:
- "Run tests"
- "Build docker image"
- "Push to registry"
- "Deploy to LXC"
preconditions:
- "All tests pass"
- "On main branch"
postconditions:
- "Service healthy"
- "Discord bot online"
created: 2026-02-13T12:00:00+00:00
updated: 2026-02-13T12:00:00+00:00
---
Standard deploy workflow for the Major Domo Discord bot.
```
**Notes:**
- `steps`, `preconditions`, `postconditions` are optional lists
- Values are quoted YAML strings in the list
- Procedures have the highest decay weight (1.4) - they persist longest
---
## _config.json
Embedding provider configuration. **Gitignored** (may contain API key).
```json
{
"embedding_provider": "ollama",
"openai_api_key": null,
"ollama_model": "nomic-embed-text",
"openai_model": "text-embedding-3-small"
}
```
**Notes:**
- `embedding_provider`: `"ollama"` (default) or `"openai"`
- Provider changes trigger automatic re-embedding (dimension mismatch safety: ollama=768, openai=1536)
- Configure via: `claude-memory config --provider openai --openai-key "sk-..."`
---
## .gitignore
```
_state.json
_index.json
_embeddings.json
_config.json
```
Only markdown files (memories, CORE.md, REFLECTION.md, episodes) are git-tracked. Index, state, embeddings, and config are derived/mutable data that can be regenerated.
---
*Schema version: 3.0.0 | Created: 2026-02-13 | Updated: 2026-02-19*


@@ -1,497 +0,0 @@
---
name: cognitive-memory
description: Markdown-based memory system with decay scoring, episodic logging, and auto-curated CORE.md. USE WHEN storing learnings, recalling past solutions, tracking decisions, or building knowledge connections across sessions.
---
# Cognitive Memory - Markdown-Based AI Memory
## Purpose
Cognitive Memory provides persistent, human-readable memory storage as markdown files with YAML frontmatter. Unlike MemoryGraph's opaque SQLite database, every memory is a browseable, editable markdown file organized in a git-tracked repository.
**Key features:**
- Human-readable markdown files with YAML frontmatter
- Decay scoring to surface relevant memories and let stale ones fade
- Episodic session logs for chronological context
- Auto-curated CORE.md auto-loaded into system prompt via MEMORY.md symlinks
- Git-tracked for history, rollback, and diff visibility
- Semantic search via Ollama embeddings (nomic-embed-text)
- Reflection cycles with union-find clustering to surface patterns
- Procedural memories with structured steps/preconditions/postconditions
- Tag co-occurrence analysis for pattern discovery
- REFLECTION.md with theme analysis and cross-project patterns
## When to Activate This Skill
**Explicit triggers:**
- "Remember this for later"
- "Store this solution / fix / decision"
- "What did we learn about X?"
- "Find solutions for this problem"
- "What's in memory about...?"
- "What patterns have we seen?"
- "Run a reflection"
- "What tags are related to X?"
- "Store this procedure / workflow"
**Automatic triggers (per CLAUDE.md Memory Protocol):**
- After fixing a bug
- After a git commit
- After making an architecture decision
- After discovering a reusable pattern
- After a successful configuration
- After troubleshooting sessions
- At session start (CORE.md auto-loads via symlinks; read REFLECTION.md for theme context)
- Periodically (reflect to cluster memories, suggest missing tags)
## Quick Reference
**CLI entrypoint**: `claude-memory` (wrapper at `~/.local/bin/claude-memory`)
### CLI Commands
All commands support `--help` for full argument details. Key non-obvious features:
```bash
# --episode flag auto-logs a session entry when storing
claude-memory store --type solution --title "Fixed X" --content "..." --tags "t1,t2" --episode
# recall uses semantic+keyword merge by default (when embeddings exist)
claude-memory recall "timeout error"
# use --no-semantic for keyword-only (faster, ~3ms vs ~200ms)
claude-memory recall "timeout error" --no-semantic
# procedure type takes structured steps/preconditions/postconditions
claude-memory procedure --title "Deploy flow" --content "..." \
--steps "test,build,push,deploy" --preconditions "tests pass" --postconditions "healthy"
# reflect clusters recent memories; reflection regenerates REFLECTION.md
claude-memory reflect --since 2026-01-01
claude-memory reflection
# tags sub-commands: list (counts), related (co-occurrence), suggest (for a memory)
claude-memory tags list
claude-memory tags related "python"
claude-memory tags suggest <memory_id>
```
**Full command list:** `store`, `recall`, `get`, `search`, `update`, `delete`, `stats`, `recent`, `decay`, `core`, `episode`, `reindex`, `pin`, `embed`, `reflect`, `reflection`, `tags`, `procedure`, `merge`, `edge-get`, `edge-search`, `edge-update`, `edge-delete`, `config`, `graphs`, `graph-create`
### Memory Types
| Type | When to Use | Decay Weight |
|------|-------------|--------------|
| `procedure` | Structured workflow with steps/preconditions/postconditions | 1.4 (decays slowest) |
| `decision` | Architecture or design choices | 1.3 |
| `insight` | Cross-cutting pattern from reflection cycles | 1.25 |
| `solution` | Fix or resolution to a problem | 1.2 |
| `code_pattern` | Reusable code pattern | 1.1 |
| `configuration` | Config that worked | 1.1 |
| `fix` | Code-level patch or correction | 1.0 |
| `workflow` | Process or sequence | 1.0 |
| `problem` | Issue or challenge encountered | 0.9 |
| `error` | Specific error condition | 0.8 |
| `general` | Catch-all for other learnings | 0.8 |
### Importance Scale
| Score | Meaning |
|-------|---------|
| 0.8-1.0 | Critical - affects multiple projects, prevents major issues |
| 0.5-0.7 | Standard - useful pattern or solution |
| 0.3-0.4 | Minor - nice-to-know, edge cases |
## Directory Structure
### Skill layer (Claude Code reads this)
```
~/.claude/skills/cognitive-memory/
├── SKILL.md # This file
└── SCHEMA.md # Memory file format reference
```
### Application code
```
/mnt/NV2/Development/cognitive-memory/
├── client.py # Core API + CLI entrypoint
├── cli.py # CLI interface
├── common.py # Shared utilities
├── analysis.py # Reflection/analysis
├── edges.py # Edge management
├── embeddings.py # Embedding support
├── mcp_server.py # MCP server for Claude Code tools
├── feature.json # Version metadata
├── scripts/
│ ├── session_memory.py # SessionEnd hook — auto-stores session learnings
│ ├── edge-proposer.py # Edge proposer
│ └── memory-git-sync.sh # Git sync for data dir
└── systemd/
├── README.md # Install instructions for timers
├── cognitive-memory-daily.* # Daily: decay, core, git sync
├── cognitive-memory-embed.* # Hourly: refresh embeddings
└── cognitive-memory-weekly.* # Weekly: reflection cycle
```
## Data Directory Structure
Data lives at `$XDG_DATA_HOME/cognitive-memory/` (default: `~/.local/share/cognitive-memory/`).
Override with `COGNITIVE_MEMORY_DIR` env var. Named graphs live as siblings (see [Multi-Graph](#multi-graph) below).
```
~/.local/share/cognitive-memory/
├── CORE.md # Auto-curated ~3K token summary
├── REFLECTION.md # Theme analysis & cross-project patterns
├── graph/ # Semantic memories (knowledge graph)
│ ├── solutions/ # Solution memories
│ ├── fixes/ # Fix memories
│ ├── decisions/ # Decision memories
│ ├── configurations/ # Configuration memories
│ ├── problems/ # Problem memories
│ ├── workflows/ # Workflow memories
│ ├── code-patterns/ # Code pattern memories
│ ├── errors/ # Error memories
│ ├── general/ # General memories
│ ├── procedures/ # Procedural memories (steps/pre/postconditions)
│ ├── insights/ # Reflection-generated insights
│ └── edges/ # Rich edge files (first-class relationship objects)
├── episodes/ # Daily session logs (YYYY-MM-DD.md)
├── vault/ # Pinned memories (never decay)
├── _index.json # Computed index for fast lookups
├── _state.json # Mutable state (access counts, decay scores)
├── _embeddings.json # Ollama embedding vectors (semantic search)
└── .gitignore # Ignores _state.json, _index.json, _embeddings.json
```
## Decay Model
Memories have a decay score that determines their relevance over time:
```
decay_score = importance × e^(-0.03 × days_since_access) × log2(access_count + 1) × type_weight
```
| Score Range | Status | Behavior |
|-------------|--------|----------|
| 0.5+ | Active | Included in search results, eligible for CORE.md |
| 0.2-0.5 | Fading | Deprioritized in results |
| 0.05-0.2 | Dormant | Only found via explicit search |
| <0.05 | Archived | Hidden from search (files remain on disk) |
**Half-life:** ~23 days. Access a memory to reset its timer.
**Vault:** Pinned memories have infinite decay score.
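The formula and status thresholds above can be expressed as a minimal Python sketch (illustrative only; the real scoring lives in the application code):

```python
import math

def decay_score(importance, days_since_access, access_count, type_weight=1.0):
    """Relevance per the decay model:
    importance * e^(-0.03 * days) * log2(access_count + 1) * type_weight."""
    return (importance
            * math.exp(-0.03 * days_since_access)
            * math.log2(access_count + 1)
            * type_weight)

def status(score):
    """Map a decay score to its lifecycle status from the table above."""
    if score >= 0.5:
        return "active"
    if score >= 0.2:
        return "fading"
    if score >= 0.05:
        return "dormant"
    return "archived"
```

A freshly accessed memory (importance 1.0, one access, weight 1.0) scores exactly 1.0; left untouched for ln(2)/0.03 ≈ 23.1 days it halves, which is where the ~23-day half-life comes from.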
## Workflow Patterns
### 1. Store a Bug Fix (with Edge)
```bash
# Store the solution
claude-memory store --type solution \
--title "Fixed Redis connection timeouts" \
--content "Added socket_keepalive=True and socket_timeout=300..." \
--tags "redis,timeout,production" --importance 0.8
# Returns: memory_id = abc-123
# Link it to the problem memory that prompted the fix
# (use memory_recall or memory_search to find the related memory first)
claude-memory relate abc-123 def-456 SOLVES \
--description "Keepalive fix resolves the idle timeout disconnections"
# Log the episode
claude-memory episode --type fix --title "Fixed Redis timeouts" \
--tags "redis,production" --summary "Keepalive prevents idle disconnections"
```
### 2. Recall Before Starting Work
```bash
# Check what we know about a topic
claude-memory recall "authentication oauth"
# Find solutions for similar problems
claude-memory search --types "solution" --tags "python,api"
```
### 3. Document a Decision
```bash
claude-memory store --type decision \
--title "Chose PostgreSQL over MongoDB" \
--content "Need ACID transactions, complex joins..." \
--tags "database,architecture" --importance 0.9
```
### 4. Semantic Recall (Deeper Matching)
```bash
# First-time setup: generate embeddings (requires Ollama + nomic-embed-text)
claude-memory embed
# Recall uses semantic+keyword merge by default
claude-memory recall "authentication timeout"
# Use --no-semantic for keyword-only
claude-memory recall "authentication timeout" --no-semantic
```
### 5. Store a Procedure
```bash
claude-memory procedure \
--title "Deploy Major Domo to production" \
--content "Standard deploy workflow for the Discord bot" \
--steps "run tests,build docker image,push to registry,deploy to LXC" \
--preconditions "all tests pass,on main branch,version bumped" \
--postconditions "service healthy,Discord bot online" \
--tags "major-domo,deploy,docker" --importance 0.8
```
### 6. Run a Reflection Cycle
```bash
# Analyze recent memories for patterns
claude-memory reflect
# Review memories since a specific date
claude-memory reflect --since 2026-01-15
# Preview without modifying state
claude-memory reflect --dry-run
# Generate/refresh the REFLECTION.md summary
claude-memory reflection
```
### 7. Tag Analysis
```bash
# What tags exist and how often?
claude-memory tags list --limit 20
# What tags co-occur with "python"?
claude-memory tags related "python"
# What tags should this memory have?
claude-memory tags suggest <memory_id>
```
### 8. Create Edges Between Related Memories
Edges are first-class relationship objects that connect memories into a traversable graph. Always look for opportunities to link memories — this makes recall and future RAG retrieval significantly richer.
**Relation types:** `SOLVES`, `CAUSES`, `BUILDS_ON`, `ALTERNATIVE_TO`, `REQUIRES`, `FOLLOWS`, `RELATED_TO`
```bash
# Via MCP (preferred in Claude Code sessions):
# memory_relate(from_id, to_id, rel_type, description, strength)
# Via CLI:
claude-memory relate <from_id> <to_id> BUILDS_ON \
--description "Extended the deploy procedure with rollback steps"
# Find what a memory is connected to (up to 3 hops deep)
claude-memory related <memory_id> --max-depth 2
# Search edges by type or connected memory
claude-memory edge-search --types SOLVES
claude-memory edge-search --from <memory_id>
```
**When to create edges:**
- Solution fixes a known problem → `SOLVES`
- New memory extends or refines an earlier one → `BUILDS_ON`
- Error leads to a known failure mode → `CAUSES`
- Two approaches to the same problem → `ALTERNATIVE_TO`
- Memory depends on another being true/done first → `REQUIRES`
- Steps in a sequence → `FOLLOWS`
- Conceptually related but no specific type fits → `RELATED_TO`
### 9. Session Maintenance
```bash
# Recalculate decay scores
claude-memory decay
# Regenerate CORE.md
claude-memory core
# Refresh embeddings after adding new memories
claude-memory embed
# View what's fading
claude-memory search --min-importance 0.3
```
## CORE.md
Auto-generated summary of highest-relevance memories (~1K tokens), auto-loaded into the system prompt at every session via MEMORY.md symlinks. Each project's `~/.claude/projects/<project>/memory/MEMORY.md` symlinks to `CORE.md` in the data directory. Symlinks are refreshed daily by `cognitive-memory-daily.service` (via `claude-memory-symlinks` script). Regenerated by the `core` CLI command.
## REFLECTION.md
Auto-generated summary of memory themes, cross-project patterns, and access statistics. Contains 5 sections: Themes (tag co-occurrences), Cross-Project Patterns (tags spanning multiple projects), Most Accessed (top 10 by access count), Recent Insights (latest insight-type memories), and Consolidation History. Generated by `reflection` CLI command or automatically during `reflect`.
## Semantic Search
Requires Ollama running locally with the `nomic-embed-text` model. Generate embeddings with `claude-memory embed`. Recall uses semantic+keyword merge by default when embeddings exist (~200ms with warm cache). Use `--no-semantic` for keyword-only (~3ms). Embeddings are cached in memory (mtime-based invalidation) so repeated recalls avoid re-parsing the 24MB embeddings file. Semantic search provides deeper matching beyond exact title/tag keywords, useful for finding conceptually related memories even when different terminology was used.
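A semantic+keyword merge can be illustrated with a toy ranker. This is a hedged sketch, not cognitive-memory's actual scoring: the cosine-similarity metric, the keyword-overlap score, and the `semantic_weight` blend are all assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Fraction of query terms that appear in the memory text.
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def merged_rank(query, query_vec, memories, semantic_weight=0.6):
    # memories: list of (title, embedding) pairs; highest blended score first.
    scored = []
    for title, vec in memories:
        score = (semantic_weight * cosine(query_vec, vec)
                 + (1 - semantic_weight) * keyword_score(query, title))
        scored.append((score, title))
    return [t for _, t in sorted(scored, reverse=True)]
```

The point of the blend is that a memory phrased with different terminology can still rank highly on embedding similarity even when it shares no keywords with the query.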
## MCP Server
Cognitive Memory v3.0 includes a native MCP server for direct Claude Code tool integration. Instead of CLI calls via Bash, Claude Code can call memory tools directly.
**Registration:** Configured in `~/.claude.json` under `mcpServers.cognitive-memory`.
**Available tools:** `memory_store`, `memory_recall`, `memory_get`, `memory_search`, `memory_relate`, `memory_related`, `memory_edge_get`, `memory_edge_search`, `memory_reflect`, `memory_reflection`, `memory_stats`, `memory_episode`, `memory_tags_list`, `memory_tags_related`, `memory_embed`, `memory_core`, `memory_decay`, `memory_config`
## Rich Edges
Relationships between memories are now first-class objects with their own markdown files in `graph/edges/`. Each edge has:
- Full YAML frontmatter (id, type, from/to IDs and titles, strength, timestamps)
- Description body explaining the relationship in detail
- Backward-compatible: `edge_id` field added to memory relation entries for fast BFS traversal
**Edge CLI commands:** `edge-get`, `edge-search`, `edge-update`, `edge-delete`
**Edge file format:** `{from-slug}--{TYPE}--{to-slug}-{6char}.md`
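The filename pattern can be sketched as follows; the slug rules and the source of the 6-character suffix are assumptions for illustration, not confirmed implementation details:

```python
import re
import uuid

def slugify(title):
    # Lowercase, collapse non-alphanumerics to hyphens (assumed slug rule).
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def edge_filename(from_title, rel_type, to_title, suffix=None):
    """Build '{from-slug}--{TYPE}--{to-slug}-{6char}.md' per the format above.
    The 6-char suffix is assumed to be a random uniqueness token."""
    suffix = suffix or uuid.uuid4().hex[:6]
    return f"{slugify(from_title)}--{rel_type}--{slugify(to_title)}-{suffix}.md"
```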
## Embedding Providers
Supports multiple embedding providers with automatic fallback:
- **Ollama** (default): Local, free, uses `nomic-embed-text` (768 dimensions)
- **OpenAI** (optional): Higher quality, uses `text-embedding-3-small` (1536 dimensions)
Configure with: `claude-memory config --provider openai --openai-key "sk-..."`
View config: `claude-memory config --show`
Provider changes trigger automatic re-embedding (dimension mismatch safety).
Config stored in `_config.json` (gitignored, may contain API key).
## Multi-Graph
Graphs are named, isolated memory namespaces. Each graph has its own index, state, embeddings, episodes, edges, CORE.md, and REFLECTION.md. Use them to separate unrelated domains (e.g., work vs. personal, per-project isolation).
### Storage Layout
```
~/.local/share/
├── cognitive-memory/ # "default" graph (always exists)
│ ├── graph/, episodes/, vault/
│ ├── _index.json, _state.json, _embeddings.json, _config.json
│ ├── CORE.md, REFLECTION.md
│ └── ...
└── cognitive-memory-graphs/ # Named graphs live here as siblings
├── work/ # Graph "work"
│ ├── graph/, episodes/, vault/
│ ├── _index.json, _state.json, ...
│ └── ...
└── research/ # Graph "research"
└── ...
```
### Creating a Graph
Use `graph-create` to set up a new graph with the full directory structure:
```bash
# Convention path (~/.local/share/cognitive-memory-graphs/work/)
claude-memory graph-create work
# Custom path (registered in default graph's _config.json automatically)
claude-memory graph-create work --path /mnt/data/work-memories
```
Graphs are also auto-created on first use — storing a memory with `graph=<name>` creates the directory structure automatically:
```bash
# CLI: use --graph before the subcommand
claude-memory --graph work store --type decision \
--title "Chose Postgres" --content "..." --tags "db,arch"
# MCP: pass graph parameter on any tool
# memory_store(type="decision", title="...", content="...", graph="work")
```
### Using Graphs
Every CLI command and MCP tool accepts a graph parameter:
```bash
# CLI
claude-memory --graph work recall "authentication"
claude-memory --graph work stats
claude-memory --graph work embed
# List all graphs (default + configured + discovered on disk)
claude-memory graphs
# MCP: memory_graphs()
```
### Per-Project Graph Routing
Set `COGNITIVE_MEMORY_GRAPH` to automatically route all memory operations to a specific graph without passing `graph=` on every call. Configure it per-project in Claude Code's settings:
```json
// .claude/settings.json (in your project root)
{
"env": {
"COGNITIVE_MEMORY_GRAPH": "paper-dynasty"
}
}
```
Resolution order: explicit `graph` parameter > `COGNITIVE_MEMORY_GRAPH` env var > `"default"`.
This means a project can set its default graph, but individual calls can still override it with an explicit `graph` parameter when needed.
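The resolution order reduces to a few lines; `resolve_graph` here is a hypothetical helper for illustration, not part of the CLI:

```python
import os

def resolve_graph(explicit=None, env=None):
    """Pick the target graph: an explicit parameter wins, then the
    COGNITIVE_MEMORY_GRAPH env var, then the "default" graph."""
    env = os.environ if env is None else env
    if explicit:
        return explicit
    return env.get("COGNITIVE_MEMORY_GRAPH") or "default"
```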
### Graph Isolation
- Each graph has **independent** index, state, embeddings, decay scores, episodes, and edges
- Edges can only connect memories **within the same graph** — cross-graph edges are not supported
- Embedding configuration (`_config.json`) is per-graph — each graph can use a different provider
- The graph registry (custom path mappings) lives in the **default** graph's `_config.json`
### Automated Maintenance
- **Systemd timers** (decay, core, embed, reflect) run against all graphs via `maintain-all-graphs.sh`
- **Git sync** (`memory-git-sync.sh`) syncs the default graph and any named graphs that are git repos
- **`edge-proposer.py`** and **`session_memory.py`** accept `--graph` to target a specific graph
## Episode Logging
Daily markdown files appended during sessions, providing chronological context:
```markdown
# 2025-12-13
## 10:30 - Fixed Discord bot reconnection
- **Type:** fix
- **Tags:** major-domo, discord, python
- **Summary:** Implemented exponential backoff for reconnections
```
## Migration from MemoryGraph
This system was migrated from MemoryGraph (SQLite-based). The original database is archived at `~/.memorygraph/memory.db.archive`. All 313 memories and 30 relationships were preserved.
## Proactive Usage
This skill should be used proactively when:
1. **Bug fixed** - Store problem + solution (use `--episode` for auto-logging)
2. **Git commit made** - Store summary of what was fixed/added
3. **Architecture decision** - Store choice + rationale
4. **Pattern discovered** - Store for future recall
5. **Configuration worked** - Store what worked and why
6. **Troubleshooting complete** - Store what was tried, what worked
7. **Session start** - CORE.md auto-loads via MEMORY.md symlinks; read REFLECTION.md for theme context, then recall relevant memories (semantic is on by default)
8. **Multi-step workflow documented** - Use `procedure` type with structured steps/preconditions/postconditions
9. **Periodically** - Run `reflect` to cluster memories and surface cross-cutting insights. Run `tags suggest` to find missing tag connections
10. **After adding memories** - Run `embed` to refresh semantic search index
11. **Related memories exist** - Create edges to connect them. After storing a new memory, check if it relates to existing ones (SOLVES a problem, BUILDS_ON a prior solution, is ALTERNATIVE_TO another approach, etc.). Well-connected memories produce richer recall and graph traversal for RAG retrieval.
12. **Session ending** - Prompt: "Should I store today's learnings?"
---
**Skill**: `~/.claude/skills/cognitive-memory/`
**App**: `/mnt/NV2/Development/cognitive-memory/`
**Data**: `$XDG_DATA_HOME/cognitive-memory/` (default: `~/.local/share/cognitive-memory/`)
**Version**: 3.1.0
**Created**: 2026-02-13
**Migrated from**: MemoryGraph (SQLite)

View File

@@ -47,7 +47,6 @@ mkdir -p ~/.local/share/claude-scheduled/logs/<task-name>
Write a clear, structured prompt that tells Claude exactly what to do. Include:
- Specific instructions (repos to check, files to read, etc.)
- Desired output format (structured text or JSON)
- Any cognitive-memory operations (recall context, store results)
**Guidelines:**
- Be explicit — headless Claude has no user to ask for clarification
@@ -62,7 +61,6 @@ Write a clear, structured prompt that tells Claude exactly what to do. Include:
"effort": "medium",
"max_budget_usd": 0.75,
"allowed_tools": "<space-separated tool list>",
"graph": "default",
"working_dir": "/mnt/NV2/Development/claude-home",
"timeout_seconds": 300
}
@@ -76,27 +74,16 @@ Write a clear, structured prompt that tells Claude exactly what to do. Include:
| `effort` | `medium` | `low`, `medium`, or `high`. Controls reasoning depth. |
| `max_budget_usd` | `0.25` | Per-session cost ceiling. Typical triage run: ~$0.20. |
| `allowed_tools` | `Read(*) Glob(*) Grep(*)` | Space-separated tool allowlist. Principle of least privilege. |
| `graph` | `default` | Cognitive-memory graph for storing results. |
| `working_dir` | `/mnt/NV2/Development/claude-home` | `cd` here before running. Loads that project's CLAUDE.md. |
| `timeout_seconds` | `300` | Hard timeout. 300s (5 min) is usually sufficient. |
**Common tool allowlists by task type:**
Read-only triage (Gitea + memory):
```
mcp__gitea-mcp__list_repo_issues mcp__gitea-mcp__get_issue_by_index mcp__gitea-mcp__list_repo_labels mcp__gitea-mcp__list_repo_pull_requests mcp__cognitive-memory__memory_recall mcp__cognitive-memory__memory_search mcp__cognitive-memory__memory_store mcp__cognitive-memory__memory_episode
```
Code analysis (read-only):
```
Read(*) Glob(*) Grep(*)
```
Memory maintenance:
```
mcp__cognitive-memory__memory_recall mcp__cognitive-memory__memory_search mcp__cognitive-memory__memory_store mcp__cognitive-memory__memory_relate mcp__cognitive-memory__memory_reflect mcp__cognitive-memory__memory_episode
```
### Step 4: Write MCP config (`mcp.json`) — if needed
Only include MCP servers the task actually needs. Use `--strict-mcp-config` (runner does this automatically when mcp.json exists).
@@ -117,18 +104,6 @@ Gitea:
}
```
Cognitive Memory:
```json
{
"cognitive-memory": {
"command": "python3",
"type": "stdio",
"args": ["/mnt/NV2/Development/cognitive-memory/mcp_server.py"],
"env": {}
}
}
```
n8n:
```json
{
@@ -236,8 +211,7 @@ ls -lt ~/.local/share/claude-scheduled/logs/<task-name>/
3. Invokes `claude -p` with `--strict-mcp-config`, `--allowedTools`, `--no-session-persistence`, `--output-format json`
4. Unsets `CLAUDECODE` env var to allow nested sessions
5. Logs full output to `~/.local/share/claude-scheduled/logs/<task>/`
6. Stores a summary to cognitive-memory as a workflow + episode
7. Rotates logs (keeps last 30 per task)
6. Rotates logs (keeps last 30 per task)
**The runner does NOT need modification to add new tasks** — just add files under `tasks/` and a timer.

View File

@@ -34,7 +34,6 @@ Or just ask:
| MCP | Category | Description | Est. Tokens |
|-----|----------|-------------|-------------|
| `cognitive-memory` | Memory | Cognitive memory with decay scoring | Always loaded |
| `n8n-mcp` | Automation | n8n workflow management | ~1500 |
| `playwright` | Automation | Browser automation and testing | ~1000 |

View File

@@ -25,14 +25,6 @@ When user makes a request, analyze the task to determine if any MCPs would be us
### MCP Registry
```json
{
"cognitive-memory": {
"type": "stdio",
"category": "memory",
"description": "Cognitive memory system with decay scoring, episodic logging, and graph relationships",
"triggers": ["memory", "recall", "remember", "cognitive"],
"command": "/home/cal/.claude/skills/cognitive-memory/mcp-server/cognitive-memory-mcp",
"alwaysLoaded": true
},
"n8n-mcp": {
"type": "stdio",
"category": "automation",
@@ -107,13 +99,13 @@ Would you like me to unload n8n-mcp to free up context? (yes/no)
### Where Claude Code Actually Reads MCP Config
Claude Code reads MCP server definitions from **two** locations:
1. **Global**: `~/.claude.json` → top-level `mcpServers` key (always-on MCPs like `cognitive-memory`, `gitea-mcp`, `n8n-mcp`, `tui-driver`)
1. **Global**: `~/.claude.json` → top-level `mcpServers` key (always-on MCPs like `gitea-mcp`, `n8n-mcp`, `tui-driver`)
2. **Project**: `<project-root>/.mcp.json``mcpServers` key (on-demand MCPs)
**IMPORTANT:** `~/.claude/.mcp.json` is NOT read by Claude Code. Global servers go in `~/.claude.json`. The mcp-manager operates on the project-level `.mcp.json` (auto-detected via git root) for on-demand servers.
### Locations
- **Global Config**: `~/.claude.json` → always-on MCPs (cognitive-memory)
- **Global Config**: `~/.claude.json` → always-on MCPs
- **Project Config**: `<project-root>/.mcp.json` → on-demand MCPs (n8n-mcp, playwright, etc.)
- **Full Registry**: `~/.claude/.mcp-full.json` (all available MCP definitions with credentials)
- **Backup**: `<project-root>/.mcp.json.backup` (auto-created before changes)
@@ -241,7 +233,6 @@ python3 ~/.claude/skills/mcp-manager/mcp_control.py reset
- **Context Savings**: Each unused MCP saves ~200-1000 tokens depending on complexity
- **Credential Safety**: Never modify or expose API keys/tokens
- **Minimal Default**: Start each session with minimal MCPs, load on-demand
- **Smart Defaults**: `cognitive-memory` stays loaded by default; all others are on-demand
---

View File

@@ -7,12 +7,6 @@ description: Paper Dynasty baseball card game management. USE WHEN user mentions
**SCOPE**: Only use in paper-dynasty, paper-dynasty-database repos. Do not activate in unrelated projects.
## First Step: Recall Cognitive Memory
**Before ANY Paper Dynasty operation**, use the cognitive-memory MCP:
```
memory_recall query="paper-dynasty" scope="recent"
```
---

skills/save-doc/SKILL.md Normal file
View File

@@ -0,0 +1,76 @@
---
allowed-tools: Read,Write,Edit,Glob,Grep,Bash
description: Save documentation to the knowledge base
user-invocable: true
---
Save learnings, fixes, release notes, and other documentation to the claude-home knowledge base. Files are auto-committed and pushed by the `sync-kb` systemd timer (every 2 hours), which triggers kb-rag reindexing.
## Frontmatter Template
Every `.md` file MUST have this YAML frontmatter to be indexed:
```yaml
---
title: "Short descriptive title"
description: "One-sentence summary — used for search ranking, so be specific."
type: <type>
domain: <domain>
tags: [tag1, tag2, tag3]
---
```
## Valid Values
**type** (required): `reference`, `troubleshooting`, `guide`, `context`, `runbook`
**domain** (required — matches repo directory):
| Domain | Directory | Use for |
|--------|-----------|---------|
| `networking` | `networking/` | DNS, Pi-hole, firewall, SSL, nginx, SSH |
| `docker` | `docker/` | Container configs, compose patterns |
| `vm-management` | `vm-management/` | Proxmox, KVM, LXC |
| `tdarr` | `tdarr/` | Transcoding, ffmpeg, nvenc |
| `media-servers` | `media-servers/` | Jellyfin, Plex, watchstate |
| `media-tools` | `media-tools/` | yt-dlp, Playwright, scraping |
| `monitoring` | `monitoring/` | Uptime Kuma, alerts, health checks |
| `productivity` | `productivity/` | n8n, automation, Ko-fi |
| `gaming` | `gaming/` | Steam, Proton, STL |
| `databases` | `databases/` | PostgreSQL, Redis |
| `backups` | `backups/` | Restic, snapshots, retention |
| `server-configs` | `server-configs/` | Gitea, infrastructure |
| `workstation` | `workstation/` | Dotfiles, fish, tmux, zed |
| `development` | `development/` | Dev tooling, CI, testing |
| `scheduled-tasks` | `scheduled-tasks/` | Systemd timers, Claude automation |
| `paper-dynasty` | `paper-dynasty/` | Card game project docs |
| `major-domo` | `major-domo/` | Discord bot project docs |
| `tabletop` | `tabletop/` | Tabletop gaming |
| `tcg` | `tcg/` | Trading card games |
**tags**: Free-form, lowercase, hyphenated. Reuse existing tags when possible.
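A minimal validity check for the required frontmatter fields could look like this (naive line-based parsing, hypothetical helper; the real kb-rag indexer may validate differently):

```python
import re

VALID_TYPES = {"reference", "troubleshooting", "guide", "context", "runbook"}
VALID_DOMAINS = {
    "networking", "docker", "vm-management", "tdarr", "media-servers",
    "media-tools", "monitoring", "productivity", "gaming", "databases",
    "backups", "server-configs", "workstation", "development",
    "scheduled-tasks", "paper-dynasty", "major-domo", "tabletop", "tcg",
}

def check_frontmatter(text):
    """Return a list of problems with a doc's YAML frontmatter."""
    problems = []
    m = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not m:
        return ["missing frontmatter block"]
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    for required in ("title", "description", "type", "domain", "tags"):
        if not fields.get(required):
            problems.append(f"missing {required}")
    if fields.get("type") and fields["type"] not in VALID_TYPES:
        problems.append(f"invalid type: {fields['type']}")
    if fields.get("domain") and fields["domain"] not in VALID_DOMAINS:
        problems.append(f"invalid domain: {fields['domain']}")
    return problems
```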
## File Naming
- Lowercase, hyphenated: `pihole-dns-timeout-fix.md`
- Release notes: `release-YYYY.M.DD.md` or `database-release-YYYY.M.DD.md`
- Troubleshooting additions: append to existing `{domain}/troubleshooting.md` when possible
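Assuming `M` means an unpadded month and `DD` a zero-padded day in the release-notes pattern, a small (hypothetical) helper could generate these names:

```python
from datetime import date

def release_filename(d, database=False):
    """Build 'release-YYYY.M.DD.md' or 'database-release-YYYY.M.DD.md'."""
    prefix = "database-release" if database else "release"
    return f"{prefix}-{d.year}.{d.month}.{d.day:02d}.md"
```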
## Where to Save
Save to `/mnt/NV2/Development/claude-home/{domain}/`. The file will be auto-committed and pushed by the `sync-kb` timer, triggering kb-rag reindexing.
## Workflow
1. Identify what's worth documenting (fix, decision, config, incident, release)
2. Check if an existing doc should be updated instead (`kb-search` or `Glob`)
3. Write the file with proper frontmatter to the correct directory
4. Confirm to the user what was saved and where
## Examples
See `examples/` in this skill directory for templates of each document type:
- `examples/troubleshooting.md` — Bug fix / incident resolution
- `examples/release-notes.md` — Deployment / release changelog
- `examples/guide.md` — How-to / setup guide
- `examples/runbook.md` — Operational procedure

View File

@@ -0,0 +1,43 @@
---
title: "Paper Dynasty Dev Server Guide"
description: "Setup guide for Paper Dynasty local development server with Docker Compose, database seeding, and hot-reload configuration."
type: guide
domain: development
tags: [paper-dynasty, docker, development, setup]
---
# Paper Dynasty Dev Server Guide
## Prerequisites
- Docker with Compose v2
- Git access to `cal/paper-dynasty` and `cal/paper-dynasty-database`
- `.env` file from the project wiki or another dev
## Quick Start
```bash
cd /mnt/NV2/Development/paper-dynasty
cp .env.example .env # then fill in DB creds and Discord token
docker compose -f docker-compose.dev.yml up -d
```
## Services
| Service | Port | Purpose |
|---------|------|---------|
| `api` | 8080 | FastAPI backend (hot-reload enabled) |
| `db` | 5432 | PostgreSQL 16 |
| `bot` | — | Discord bot (connects to dev guild) |
## Database Seeding
```bash
docker compose exec api python -m scripts.seed_dev_data
```
This creates 5 test players with pre-built collections for testing trades and gauntlet.
## Common Issues
- **Bot won't connect**: Check `DISCORD_TOKEN` in `.env` points to the dev bot, not prod
- **DB connection refused**: Wait 10s for postgres healthcheck, or `docker compose restart db`

View File

@@ -0,0 +1,32 @@
---
title: "Major Domo v2 Release — 2026.3.17"
description: "Release notes for Major Domo v2 Discord bot deployment on 2026-03-17 including stat corrections, new commands, and dependency updates."
type: reference
domain: major-domo
tags: [major-domo, deployment, release-notes, discord]
---
# Major Domo v2 Release — 2026.3.17
**Date:** 2026-03-17
**Deployed to:** production (sba-bot)
**PR(s):** #84, #85, #87
## Changes
### New Features
- `/standings` command now shows division leaders with magic numbers
- Added `!weather` text command for game-day weather lookups
### Bug Fixes
- Fixed roster sync skipping players with apostrophes in names
- Corrected OBP calculation to exclude sacrifice flies from denominator
### Dependencies
- Upgraded discord.py to 2.5.1 (fixes voice channel memory leak)
- Pinned SQLAlchemy to 2.0.36 (regression in .37)
## Deployment Notes
- Required database migration: `alembic upgrade head` (added `weather_cache` table)
- No config changes needed
- Rollback: revert to image tag `2026.3.16` if issues arise

View File

@@ -0,0 +1,48 @@
---
title: "KB-RAG Reindex Runbook"
description: "Operational runbook for manual and emergency reindexing of the claude-home knowledge base on manticore."
type: runbook
domain: development
tags: [kb-rag, qdrant, manticore, operations]
---
# KB-RAG Reindex Runbook
## When to Use
- Search returns stale or missing results after a push
- After bulk file additions or directory restructuring
- After recovering from container crash or volume issue
## Incremental Reindex (normal)
Only re-embeds files whose content hash changed. Fast (~1-5 seconds).
```bash
ssh manticore "docker exec md-kb-rag-kb-rag-1 md-kb-rag index"
```
## Full Reindex (nuclear option)
Clears the state DB and re-embeds everything. Slow (~2-3 minutes for 150+ files).
```bash
ssh manticore "docker exec md-kb-rag-kb-rag-1 md-kb-rag index --full"
```
## Verify
```bash
# Check health
ssh manticore "docker exec md-kb-rag-kb-rag-1 md-kb-rag health"
# Check indexed file count and Qdrant point count
ssh manticore "docker exec md-kb-rag-kb-rag-1 md-kb-rag status"
# Check for validation errors in recent logs
ssh manticore "docker logs md-kb-rag-kb-rag-1 --tail 30 2>&1 | grep WARN"
```
## Escalation
- If Qdrant won't start: check disk space on manticore (`df -h`)
- If embeddings OOM: check GPU memory (`ssh manticore "nvidia-smi"`)
- Full stack restart: `ssh manticore "cd ~/docker/md-kb-rag && docker compose down && docker compose up -d"`

View File

@@ -0,0 +1,33 @@
---
title: "Fix: Scout Token Purchase Not Deducting Currency"
description: "Scout token buy flow silently failed to deduct 200₼ due to using db_patch instead of the dedicated money endpoint."
type: troubleshooting
domain: development
tags: [paper-dynasty, discord, api, bug-fix]
---
# Fix: Scout Token Purchase Not Deducting Currency
**Date:** 2026-03-15
**PR:** #90
**Severity:** High — players getting free tokens
## Problem
The `/buy scout-token` command completed successfully but didn't deduct the 200₼ cost. Players could buy unlimited tokens.
## Root Cause
The buy handler used `db_patch('/players/{id}', {'scout_tokens': new_count})` to increment tokens, but this endpoint doesn't trigger the money deduction side-effect. The dedicated `/players/{id}/money` endpoint handles balance validation and atomic deduction.
## Fix
Replaced the `db_patch` call with a two-step flow:
1. `POST /players/{id}/money` with `{"amount": -200, "reason": "scout_token_purchase"}`
2. Only increment `scout_tokens` if the money call succeeds
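The two-step flow can be sketched as follows; `client` and the token-increment endpoint are hypothetical stand-ins for the bot's actual HTTP helpers, and only the `/players/{id}/money` payload comes from the fix description:

```python
def buy_scout_token(client, player_id, cost=200):
    """Deduct money first via the dedicated endpoint, then grant the token
    only if the deduction succeeded (no more silent free tokens)."""
    resp = client.post(f"/players/{player_id}/money",
                       json={"amount": -cost, "reason": "scout_token_purchase"})
    if not resp.get("ok"):
        # Balance validation failed: stop before granting anything.
        return {"ok": False, "error": resp.get("error", "deduction failed")}
    # Hypothetical increment call; the real mechanism is unspecified here.
    client.post(f"/players/{player_id}/scout-tokens/increment", json={"by": 1})
    return {"ok": True}
```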
## Lessons
- Always use dedicated money endpoints for currency operations — never raw patches
- The `db_patch` helper bypasses business logic by design (it's for admin corrections)
- Added integration test covering the full buy→deduct→verify flow