feat: cognitive-memory multi-graph support, XDG data paths, remove CORE.md auto-load

- Multi-graph: named graph routing in MCP server (graph param on all tools),
  CLI --graph flag, graphs subcommand, resolve_graph_path() in common.py
- XDG compliance: data dir resolves via COGNITIVE_MEMORY_DIR env > XDG_DATA_HOME > ~/.local/share/
- Remove CORE.md auto-loading: drop MEMORY.md symlinks, update CLAUDE.md to MCP-first recall
- Update all scripts (git-sync, ensure-symlinks, edge-proposer) for portable path resolution
- Remove symlinks step from daily systemd service
- Version bump to 3.1.0
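
The XDG resolution order described above can be sketched as (a minimal illustration of the precedence, not the actual common.py code; `resolve_memory_dir` is a hypothetical helper):

```python
import os
from pathlib import Path


def resolve_memory_dir(env: dict) -> Path:
    """Sketch of the documented precedence:
    COGNITIVE_MEMORY_DIR > XDG_DATA_HOME > ~/.local/share."""
    override = env.get("COGNITIVE_MEMORY_DIR", "")
    if override:
        return Path(override).expanduser()
    xdg = env.get("XDG_DATA_HOME", "") or str(Path.home() / ".local" / "share")
    return Path(xdg) / "cognitive-memory"
```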

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cal Corum 2026-02-28 14:55:12 -06:00
parent ba0562a8ba
commit e484b2ab35
15 changed files with 263 additions and 44 deletions


@@ -34,10 +34,10 @@ Automatic loads are NOT enough — Read loads required CLAUDE.md context along t
- Fallback for unlisted homelab hosts: `ssh 10.10.0.x` (wildcard rule handles key/user)
## Memory Protocol (Cognitive Memory)
- Skill: `~/.claude/skills/cognitive-memory/` | Data: `~/.claude/memory/`
- CORE.md auto-loads via MEMORY.md symlinks (no manual loading needed)
- Session start: Read `~/.claude/memory/REFLECTION.md` for theme context
- Auto-store on: bug fixes, git commits (mandatory, --episode), architecture decisions, patterns, configs
- Skill: `~/.claude/skills/cognitive-memory/` | Data: `~/.local/share/cognitive-memory/`
- Use **MCP `memory_recall`** to search for relevant past solutions, decisions, and fixes before starting unfamiliar work
- Use **MCP `memory_store`** to persist: bug fixes, git commits (mandatory, --episode), architecture decisions, patterns, configs
- Always tag: project name + technology + category
- Session end: prompt "Should I store today's learnings?"
- `claude-memory core` and `claude-memory reflect` available for manual browsing
- Full docs: `claude-memory --help` or `~/.claude/skills/cognitive-memory/SKILL.md`


@@ -1 +0,0 @@
{"claude.ai Google Calendar":{"timestamp":1772171669455},"claude.ai Gmail":{"timestamp":1772171669461}}


@@ -1,5 +1,5 @@
{
"fetchedAt": "2026-02-27T05:37:59.054Z",
"fetchedAt": "2026-02-28T18:10:38.336Z",
"plugins": [
{
"plugin": "code-review@claude-plugins-official",


@@ -13,6 +13,6 @@
"repo": "anthropics/claude-code"
},
"installLocation": "/home/cal/.claude/plugins/marketplaces/claude-code-plugins",
"lastUpdated": "2026-02-27T05:41:10.127Z"
"lastUpdated": "2026-02-28T18:11:46.614Z"
}
}

@@ -1 +1 @@
Subproject commit 1f48d799b9ef25a5c9cec36dd9a4c6f409b722f6
Subproject commit cd4956871a56b07d9d2a6cb538fae109d04c9d57


@@ -125,8 +125,11 @@ claude-memory tags suggest <memory_id>
## Data Directory Structure
Data lives at `$XDG_DATA_HOME/cognitive-memory/` (default: `~/.local/share/cognitive-memory/`).
Override with `COGNITIVE_MEMORY_DIR` env var. Named graphs live as siblings (see Multi-Graph below).
```
~/.claude/memory/
~/.local/share/cognitive-memory/
├── CORE.md # Auto-curated ~3K token summary
├── REFLECTION.md # Theme analysis & cross-project patterns
├── graph/ # Semantic memories (knowledge graph)
@@ -311,7 +314,7 @@ claude-memory search --min-importance 0.3
## CORE.md
Auto-generated summary of highest-relevance memories (~1K tokens), auto-loaded into the system prompt at every session via MEMORY.md symlinks. Each project's `~/.claude/projects/<project>/memory/MEMORY.md` symlinks to `~/.claude/memory/CORE.md`. Symlinks are refreshed daily by `cognitive-memory-daily.service` (via `claude-memory-symlinks` script). Regenerated by the `core` CLI command.
Auto-generated summary of highest-relevance memories (~1K tokens), auto-loaded into the system prompt at every session via MEMORY.md symlinks. Each project's `~/.claude/projects/<project>/memory/MEMORY.md` symlinks to `CORE.md` in the data directory. Symlinks are refreshed daily by `cognitive-memory-daily.service` (via `claude-memory-symlinks` script). Regenerated by the `core` CLI command.
## REFLECTION.md
@@ -388,7 +391,7 @@ This skill should be used proactively when:
---
**Location**: `~/.claude/skills/cognitive-memory/`
**Data**: `~/.claude/memory/`
**Version**: 3.0.0
**Data**: `$XDG_DATA_HOME/cognitive-memory/` (default: `~/.local/share/cognitive-memory/`)
**Version**: 3.1.0
**Created**: 2026-02-13
**Migrated from**: MemoryGraph (SQLite)


@@ -660,7 +660,7 @@ class AnalysisMixin:
def reflection_summary(self) -> str:
"""Generate REFLECTION.md with patterns and insights from the memory system.
Writes to ~/.claude/memory/REFLECTION.md and git-commits the file.
Writes to REFLECTION.md in the memory data directory and git-commits the file.
Returns the generated content.
"""
index = self._load_index()


@@ -15,6 +15,8 @@ from common import (
VALID_RELATION_TYPES,
VALID_TYPES,
_load_memory_config,
resolve_graph_path,
list_graphs,
)
@@ -23,6 +25,11 @@ def main():
description="Cognitive Memory - Markdown-based memory system with decay scoring",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
    "--graph",
    default=None,
    help="Named memory graph to use (default: 'default')",
)
subparsers = parser.add_subparsers(dest="command", help="Commands")
# store
@@ -214,13 +221,17 @@ def main():
"--ollama-model", help="Set Ollama model name (e.g. qwen3-embedding:8b)"
)
# graphs
subparsers.add_parser("graphs", help="List available memory graphs")
args = parser.parse_args()
if not args.command:
    parser.print_help()
    sys.exit(1)
client = CognitiveMemoryClient()
graph_path = resolve_graph_path(args.graph)
client = CognitiveMemoryClient(memory_dir=graph_path)
result = None
if args.command == "store":
@@ -357,7 +368,7 @@ def main():
# Print path, not content (content is written to file)
result = {
"success": True,
"path": str(MEMORY_DIR / "CORE.md"),
"path": str(client.memory_dir / "CORE.md"),
"chars": len(content),
}
@@ -406,7 +417,7 @@ def main():
content = client.reflection_summary()
result = {
"success": True,
"path": str(MEMORY_DIR / "REFLECTION.md"),
"path": str(client.memory_dir / "REFLECTION.md"),
"chars": len(content),
}
@@ -453,8 +464,11 @@ def main():
)
result = {"success": True, "memory_id": memory_id}
elif args.command == "graphs":
    result = list_graphs()
elif args.command == "config":
config_path = MEMORY_DIR / "_config.json"
config_path = client.memory_dir / "_config.json"
config = _load_memory_config(config_path)
changed = False


@@ -7,6 +7,7 @@ embedding helpers, and cosine similarity. Shared by all other modules.
import json
import math
import os
import re
import urllib.request
from datetime import datetime, timezone
@@ -18,7 +19,19 @@ from urllib.error import URLError
# CONSTANTS
# =============================================================================
MEMORY_DIR = Path.home() / ".claude" / "memory"
# Data directory resolution order:
# 1. COGNITIVE_MEMORY_DIR env var (explicit override)
# 2. XDG_DATA_HOME/cognitive-memory/ (Linux standard)
# 3. ~/.local/share/cognitive-memory/ (XDG default)
_env_dir = os.environ.get("COGNITIVE_MEMORY_DIR", "")
if _env_dir:
    MEMORY_DIR = Path(_env_dir).expanduser()
else:
    _xdg_data = os.environ.get("XDG_DATA_HOME", "") or str(
        Path.home() / ".local" / "share"
    )
    MEMORY_DIR = Path(_xdg_data) / "cognitive-memory"
INDEX_PATH = MEMORY_DIR / "_index.json"
STATE_PATH = MEMORY_DIR / "_state.json"
EMBEDDINGS_PATH = MEMORY_DIR / "_embeddings.json"
@@ -112,6 +125,8 @@ FIELD_ORDER = [
# CORE.md token budget (approximate, 1 token ~= 4 chars)
CORE_MAX_CHARS = 12000 # ~3K tokens
GRAPHS_BASE_DIR = MEMORY_DIR.parent / "cognitive-memory-graphs"
# =============================================================================
# YAML FRONTMATTER PARSING (stdlib only)
@@ -519,3 +534,48 @@ def serialize_edge_frontmatter(data: Dict[str, Any]) -> str:
lines.append(f"{key}: {_format_yaml_value(value)}")
lines.append("---")
return "\n".join(lines)
def load_graph_config(config_path: Optional[Path] = None) -> Dict[str, Dict[str, Any]]:
    """Load named graphs config from _config.json 'graphs' key."""
    cfg = _load_memory_config(config_path)
    return cfg.get("graphs", {})


def resolve_graph_path(
    graph_name: Optional[str], config_path: Optional[Path] = None
) -> Path:
    """Resolve graph name to directory path. None/'default' → MEMORY_DIR."""
    if not graph_name or graph_name == "default":
        return MEMORY_DIR
    graphs = load_graph_config(config_path)
    if graph_name in graphs:
        p = graphs[graph_name].get("path", "")
        if p:
            return Path(p).expanduser()
    # Convention: sibling of MEMORY_DIR
    return GRAPHS_BASE_DIR / graph_name
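A standalone sketch of the routing behavior above, with the config lookup inlined (the paths and the `resolve_sketch` name are illustrative, not repo code):

```python
from pathlib import Path
from typing import Any, Dict, Optional

MEMORY_DIR = Path.home() / ".local" / "share" / "cognitive-memory"
GRAPHS_BASE_DIR = MEMORY_DIR.parent / "cognitive-memory-graphs"


def resolve_sketch(graph_name: Optional[str], graphs: Dict[str, Dict[str, Any]]) -> Path:
    # None or "default" routes to the primary data directory
    if not graph_name or graph_name == "default":
        return MEMORY_DIR
    # An explicit "path" in the graphs config wins
    entry = graphs.get(graph_name, {})
    if entry.get("path"):
        return Path(entry["path"]).expanduser()
    # Otherwise fall back to the sibling-directory convention
    return GRAPHS_BASE_DIR / graph_name
```

Named graphs without a configured path land under `GRAPHS_BASE_DIR`, so a bare name is always resolvable.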
def list_graphs(config_path: Optional[Path] = None) -> List[Dict[str, Any]]:
    """List all known graphs: default + configured + discovered on disk."""
    result = [{"name": "default", "path": str(MEMORY_DIR)}]
    seen = {"default"}
    # From config
    graphs = load_graph_config(config_path)
    for name, cfg in graphs.items():
        if name not in seen:
            p = cfg.get("path", "")
            path = str(Path(p).expanduser()) if p else str(GRAPHS_BASE_DIR / name)
            result.append({"name": name, "path": path})
            seen.add(name)
    # Discover on disk
    if GRAPHS_BASE_DIR.exists():
        for d in sorted(GRAPHS_BASE_DIR.iterdir()):
            if d.is_dir() and d.name not in seen:
                result.append({"name": d.name, "path": str(d)})
                seen.add(d.name)
    return result


@@ -1,6 +1,6 @@
{
"name": "cognitive-memory",
"version": "3.0.0",
"version": "3.1.0",
"description": "Markdown-based memory system with decay scoring, episodic logging, semantic search, reflection cycles, auto-curated CORE.md, native MCP server integration, rich edge files, and hybrid Ollama/OpenAI embeddings",
"created": "2026-02-13",
"migrated_from": "memorygraph",
@@ -16,7 +16,7 @@
"dev/migrate.py": "One-time migration from MemoryGraph SQLite",
"dev/PROJECT_PLAN.json": "Development roadmap and task tracking"
},
"data_location": "~/.claude/memory/",
"data_location": "$XDG_DATA_HOME/cognitive-memory/ (default: ~/.local/share/cognitive-memory/)",
"dependencies": "stdlib-only (no external packages; Ollama optional for semantic search)",
"features": [
"store: Create markdown memory files with YAML frontmatter (--episode flag)",


@@ -10,15 +10,27 @@ import json
import subprocess
import sys
from pathlib import Path
from typing import Any, Dict
from typing import Any, Dict, Optional
# Allow imports from this directory (client.py lives here)
sys.path.insert(0, str(Path(__file__).parent))
from client import CognitiveMemoryClient, _load_memory_config, MEMORY_DIR
from common import resolve_graph_path, list_graphs
SYNC_SCRIPT = Path(__file__).parent / "scripts" / "memory-git-sync.sh"
_clients: Dict[str, CognitiveMemoryClient] = {}
def get_client(graph: Optional[str] = None) -> CognitiveMemoryClient:
    """Get or create a CognitiveMemoryClient for the given graph."""
    key = graph or "default"
    if key not in _clients:
        path = resolve_graph_path(graph)
        _clients[key] = CognitiveMemoryClient(memory_dir=path)
    return _clients[key]
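The per-graph caching in `get_client` can be demonstrated with a stand-in client (`FakeClient` and the paths are hypothetical, used only to show the pattern):

```python
from typing import Dict, Optional


class FakeClient:
    """Stand-in for CognitiveMemoryClient, used only to show cache behavior."""

    def __init__(self, memory_dir: str):
        self.memory_dir = memory_dir


_cache: Dict[str, FakeClient] = {}


def get_client_sketch(graph: Optional[str] = None) -> FakeClient:
    # Same pattern as get_client: one client per graph name, created lazily
    key = graph or "default"
    if key not in _cache:
        _cache[key] = FakeClient(memory_dir=f"/tmp/graphs/{key}")
    return _cache[key]
```

Repeated calls with the same graph name reuse one client, so per-graph state (indexes, embeddings) is loaded once per server process.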
def _trigger_git_sync():
"""Fire-and-forget git sync after a write operation."""
@@ -73,6 +85,10 @@ def create_tools() -> list:
"type": "boolean",
"description": "Also log an episode entry for this memory (default true)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["type", "title", "content"],
},
@@ -98,6 +114,10 @@ def create_tools() -> list:
"type": "integer",
"description": "Maximum number of results to return (default 10)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["query"],
},
@@ -114,7 +134,11 @@ def create_tools() -> list:
"memory_id": {
"type": "string",
"description": "UUID of the memory to retrieve",
}
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["memory_id"],
},
@@ -147,6 +171,10 @@ def create_tools() -> list:
"type": "number",
"description": "Minimum importance score (0.0 to 1.0)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
@@ -180,6 +208,10 @@ def create_tools() -> list:
"type": "number",
"description": "Relationship strength from 0.0 to 1.0 (default 0.8)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["from_id", "to_id", "rel_type"],
},
@@ -206,6 +238,10 @@ def create_tools() -> list:
"type": "integer",
"description": "Maximum traversal depth (1-5, default 1)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["memory_id"],
},
@@ -221,7 +257,11 @@ def create_tools() -> list:
"edge_id": {
"type": "string",
"description": "UUID of the edge to retrieve",
}
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["edge_id"],
},
@@ -252,6 +292,10 @@ def create_tools() -> list:
"type": "string",
"description": "Filter to edges pointing at this memory UUID",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
@@ -274,6 +318,10 @@ def create_tools() -> list:
"type": "boolean",
"description": "If true, return analysis without persisting state changes (default false)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
@@ -283,7 +331,15 @@ def create_tools() -> list:
"Return the current REFLECTION.md summary — the auto-curated narrative of recent "
"memory themes, clusters, and activity. Use this to quickly orient at session start."
),
"inputSchema": {"type": "object", "properties": {}},
"inputSchema": {
"type": "object",
"properties": {
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
{
"name": "memory_stats",
@@ -291,12 +347,20 @@ def create_tools() -> list:
"Return statistics about the memory system: total count, breakdown by type, "
"relation count, decay distribution, embeddings count, and per-directory file counts."
),
"inputSchema": {"type": "object", "properties": {}},
"inputSchema": {
"type": "object",
"properties": {
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
{
"name": "memory_episode",
"description": (
"Append a timestamped entry to today's episode log file (~/.claude/memory/episodes/YYYY-MM-DD.md). "
"Append a timestamped entry to today's episode log file (episodes/YYYY-MM-DD.md in the active graph). "
"Use this to record significant session events, commits, or decisions without creating full memories."
),
"inputSchema": {
@@ -319,6 +383,10 @@ def create_tools() -> list:
"type": "string",
"description": "Optional summary text for the entry",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["type", "title"],
},
@@ -335,7 +403,11 @@ def create_tools() -> list:
"limit": {
"type": "integer",
"description": "Maximum number of tags to return (0 = unlimited, default 0)",
}
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
@@ -356,6 +428,10 @@ def create_tools() -> list:
"type": "integer",
"description": "Maximum number of related tags to return (0 = unlimited, default 0)",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
"required": ["tag"],
},
@@ -367,7 +443,15 @@ def create_tools() -> list:
"Requires either Ollama (nomic-embed-text model) or an OpenAI API key configured. "
"Embeddings enable semantic recall via memory_recall with semantic=true."
),
"inputSchema": {"type": "object", "properties": {}},
"inputSchema": {
"type": "object",
"properties": {
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
{
"name": "memory_core",
@@ -375,7 +459,15 @@ def create_tools() -> list:
"Return the current CORE.md content — the auto-curated high-priority memory digest "
"used to seed Claude sessions. Lists critical solutions, active decisions, and key fixes."
),
"inputSchema": {"type": "object", "properties": {}},
"inputSchema": {
"type": "object",
"properties": {
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
{
"name": "memory_decay",
@@ -384,7 +476,15 @@ def create_tools() -> list:
"access frequency, importance, and type weight. Archives memories whose score "
"drops below the dormant threshold. Returns a summary of updated scores."
),
"inputSchema": {"type": "object", "properties": {}},
"inputSchema": {
"type": "object",
"properties": {
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
{
"name": "memory_config",
@@ -409,15 +509,25 @@ def create_tools() -> list:
"type": "string",
"description": "OpenAI API key to store in config",
},
"graph": {
"type": "string",
"description": "Named memory graph to use (default: 'default')",
},
},
},
},
{
"name": "memory_graphs",
"description": (
"List all available memory graphs (named, segregated memory namespaces). "
"Returns each graph's name, path, and whether it exists on disk."
),
"inputSchema": {"type": "object", "properties": {}},
},
]
def handle_tool_call(
tool_name: str, arguments: Dict[str, Any], client: CognitiveMemoryClient
) -> Dict[str, Any]:
def handle_tool_call(tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
"""Dispatch MCP tool calls to the CognitiveMemoryClient."""
def ok(result: Any) -> Dict[str, Any]:
@@ -428,6 +538,9 @@ def handle_tool_call(
}
try:
    graph = arguments.pop("graph", None)
    client = get_client(graph)
if tool_name == "memory_store":
mem_type = arguments["type"]
title = arguments["title"]
@@ -564,7 +677,7 @@ def handle_tool_call(
return ok(result)
elif tool_name == "memory_config":
config_path = MEMORY_DIR / "_config.json"
config_path = client.memory_dir / "_config.json"
config = _load_memory_config(config_path)
changed = False
@@ -589,6 +702,24 @@ def handle_tool_call(
display["openai_api_key"] = key[:4] + "..." + key[-4:]
return ok(display)
elif tool_name == "memory_graphs":
    graphs = list_graphs()
    # Enrich with existence check and memory count
    for g in graphs:
        p = Path(g["path"])
        g["exists"] = p.exists()
        if g["exists"]:
            index_path = p / "_index.json"
            if index_path.exists():
                try:
                    idx = json.loads(index_path.read_text())
                    g["memory_count"] = len(idx.get("entries", {}))
                except (json.JSONDecodeError, OSError):
                    g["memory_count"] = 0
            else:
                g["memory_count"] = 0
    return ok(graphs)
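The `_index.json` counting used when enriching the `memory_graphs` listing can be exercised against a throwaway graph directory (the directory layout here is illustrative):

```python
import json
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    graph_dir = Path(tmp) / "work"
    graph_dir.mkdir()
    # A minimal index in the same shape the handler reads
    (graph_dir / "_index.json").write_text(
        json.dumps({"entries": {"a": {}, "b": {}}})
    )
    idx = json.loads((graph_dir / "_index.json").read_text())
    memory_count = len(idx.get("entries", {}))
```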
else:
return {
"content": [{"type": "text", "text": f"Unknown tool: {tool_name}"}],
@@ -605,7 +736,6 @@ def handle_tool_call(
def main():
"""MCP stdio server main loop (JSON-RPC 2.0)."""
client = CognitiveMemoryClient()
tools = create_tools()
for line in sys.stdin:
@@ -625,7 +755,7 @@ def main():
"capabilities": {"tools": {}},
"serverInfo": {
"name": "cognitive-memory-mcp-server",
"version": "3.0.0",
"version": "3.1.0",
},
},
}
@@ -644,7 +774,7 @@ def main():
tool_name = params.get("name")
arguments = params.get("arguments", {})
result = handle_tool_call(tool_name, arguments, client)
result = handle_tool_call(tool_name, arguments)
response = {"jsonrpc": "2.0", "id": message.get("id"), "result": result}
print(json.dumps(response), flush=True)
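A hypothetical JSON-RPC line a client might write to the server's stdin, showing where the new per-call `graph` argument sits (the query text is made up):

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_recall",
        "arguments": {"query": "nginx reverse proxy fix", "graph": "work"},
    },
}
# The server reads one JSON object per stdin line and dispatches on params["name"];
# handle_tool_call pops "graph" before forwarding the rest to the client method.
line = json.dumps(request)
```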


@@ -34,12 +34,22 @@ First run: 2026-02-19 — produced 5186 candidates from 473 memories,
"""
import json
import os
import re
from pathlib import Path
from collections import defaultdict
from itertools import combinations
MEMORY_DIR = Path.home() / ".claude" / "memory"
# Resolve data directory: COGNITIVE_MEMORY_DIR > XDG_DATA_HOME > default
_env_dir = os.environ.get("COGNITIVE_MEMORY_DIR", "")
if _env_dir:
    MEMORY_DIR = Path(_env_dir).expanduser()
else:
    _xdg_data = os.environ.get("XDG_DATA_HOME", "") or str(
        Path.home() / ".local" / "share"
    )
    MEMORY_DIR = Path(_xdg_data) / "cognitive-memory"
STATE_FILE = MEMORY_DIR / "_state.json"
GRAPH_DIR = MEMORY_DIR / "graph"
EDGES_DIR = GRAPH_DIR / "edges"


@@ -3,8 +3,10 @@
# This makes CORE.md auto-load into every session's system prompt.
# Run by cognitive-memory-daily.service or manually.
CORE="/home/cal/.claude/memory/CORE.md"
PROJECTS="/home/cal/.claude/projects"
# Resolve data directory: COGNITIVE_MEMORY_DIR > XDG_DATA_HOME > default
MEMORY_DIR="${COGNITIVE_MEMORY_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/cognitive-memory}"
CORE="$MEMORY_DIR/CORE.md"
PROJECTS="$HOME/.claude/projects"
if [ ! -f "$CORE" ]; then
    echo "ERROR: CORE.md not found at $CORE"


@@ -5,11 +5,12 @@
# Only commits if there are actual changes. Safe to run multiple times.
#
# Location: ~/.claude/skills/cognitive-memory/scripts/memory-git-sync.sh
# Repo: ~/.claude/memory/ -> https://git.manticorum.com/cal/claude-memory.git
# Repo: cognitive-memory data dir -> https://git.manticorum.com/cal/claude-memory.git
set -euo pipefail
MEMORY_DIR="$HOME/.claude/memory"
# Resolve data directory: COGNITIVE_MEMORY_DIR > XDG_DATA_HOME > default
MEMORY_DIR="${COGNITIVE_MEMORY_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/cognitive-memory}"
cd "$MEMORY_DIR"


@@ -1,6 +1,6 @@
[Unit]
Description=Cognitive Memory daily maintenance (decay, core, symlinks, git sync fallback)
Description=Cognitive Memory daily maintenance (decay, core, git sync)
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'export PATH="/home/cal/.local/bin:$PATH" && /home/cal/.local/bin/claude-memory decay && /home/cal/.local/bin/claude-memory core && /home/cal/.local/bin/claude-memory-symlinks && /home/cal/.claude/skills/cognitive-memory/scripts/memory-git-sync.sh'
ExecStart=/bin/bash -c 'export PATH="/home/cal/.local/bin:$PATH" && /home/cal/.local/bin/claude-memory decay && /home/cal/.local/bin/claude-memory core && /home/cal/.claude/skills/cognitive-memory/scripts/memory-git-sync.sh'