Add SSH key instructions and commit-push command

- CLAUDE.md: Add SSH section with homelab/cloud key conventions
- Add commit-push command skill
- Update session memory script

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cal Corum 2026-02-15 17:51:46 -06:00
parent efebbc7a95
commit a7b5c25766
4 changed files with 274 additions and 49 deletions


@@ -25,6 +25,11 @@ Automatic loads are NOT enough — Read loads required CLAUDE.md context along t
- Utilize dependency injection pattern whenever possible
- Never add lazy imports to middle of file
## SSH
- Use `ssh -i ~/.ssh/homelab_rsa cal@<host>` for homelab servers (10.10.0.x)
- Use `ssh -i ~/.ssh/cloud_servers_rsa root@<host>` for cloud servers (Akamai, Vultr)
- Keys are installed on every server — never use passwords or expect password prompts
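The two key conventions above can also be expressed as an `~/.ssh/config` fragment so that a bare `ssh <host>` resolves the right key and user automatically; this is a hypothetical sketch, and the `cloud-*` host aliases are placeholders rather than real hostnames:

```
# Hypothetical ~/.ssh/config mirroring the two key conventions above
Host 10.10.0.*
    User cal
    IdentityFile ~/.ssh/homelab_rsa
    IdentitiesOnly yes

# Placeholder aliases for cloud servers (Akamai, Vultr); real HostName
# entries would be added per server
Host cloud-*
    User root
    IdentityFile ~/.ssh/cloud_servers_rsa
    IdentitiesOnly yes
```

`IdentitiesOnly yes` stops ssh from offering every loaded agent key, which avoids "Too many authentication failures" on servers that cap auth attempts.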
## Memory Protocol (Cognitive Memory)
- Skill: `~/.claude/skills/cognitive-memory/` | Data: `~/.claude/memory/`
- Session start: Load `~/.claude/memory/CORE.md` and `REFLECTION.md`

commands/commit-push.md Normal file

@@ -0,0 +1,45 @@
Commit all staged/unstaged changes, push to remote, and optionally create a PR.
**This command IS explicit approval to commit and push — no need to ask for confirmation.**
## Arguments: $ARGUMENTS
If `$ARGUMENTS` contains "pr", also create a pull request after pushing.
## Steps
1. Run `git status` to see what has changed (never use `-uall`)
2. Run `git diff` to see staged and unstaged changes
3. Run `git log --oneline -5` to see recent commit style
4. If there are no changes, say "Nothing to commit" and stop
5. Determine the remote name and current branch:
- Remote: use `git remote` (if multiple, prefer `origin`; for `~/.claude` use `homelab`)
- Branch: use `git branch --show-current`
6. Stage all relevant changed files (prefer specific files over `git add -A` — avoid secrets, .env, credentials)
7. Draft a concise commit message following the repo's existing style (focus on "why" not "what")
8. Create the commit with `Co-Authored-By: Claude <model> <noreply@anthropic.com>` where `<model>` is the model currently in use (check your own model identity — e.g., Opus 4.6, Sonnet 4.5, Haiku 4.5)
9. Push to the remote with `-u` flag: `git push -u <remote> <branch>`
10. Confirm success with the commit hash
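Steps 5-9 above can be sketched as a small POSIX-shell flow. This is a minimal sketch, not the command's actual implementation: `pick_remote` and `commit_push` are hypothetical helper names, the commit message is taken as an argument rather than drafted, and the `homelab` remote preference for `~/.claude` is omitted.

```shell
# Sketch of steps 5-9; assumes git >= 2.22 (for `git branch --show-current`).
# pick_remote prefers "origin" when the repo has several remotes configured.
pick_remote() {
    remotes=$(git remote)
    if printf '%s\n' "$remotes" | grep -qx 'origin'; then
        echo origin
    else
        printf '%s\n' "$remotes" | head -n 1
    fi
}

# commit_push MESSAGE FILE...
# Stages only the named files, commits, pushes with -u, prints the new hash.
commit_push() {
    msg=$1; shift
    branch=$(git branch --show-current)
    remote=$(pick_remote)
    git add -- "$@"                     # specific files, never `git add -A`
    git commit -m "$msg" || return 1    # nothing staged: stop cleanly
    git push -u "$remote" "$branch" || return 1
    git rev-parse --short HEAD          # confirm success with the commit hash
}
```

Staging named files instead of `-A` is what keeps secrets and `.env` files out of the commit even when they are sitting dirty in the worktree.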
## If `pr` argument is present
After pushing, create a pull request:
1. Detect the hosting platform:
- If remote URL contains `github.com` → use `gh pr create`
- If remote URL contains `git.manticorum.com` or other Gitea host → use `tea pulls create`
2. Determine the default branch: `git symbolic-ref refs/remotes/<remote>/HEAD | sed 's|.*/||'` (fallback to `main`)
3. Run `git log <default-branch>..HEAD --oneline` to summarize all commits in the PR
4. Create the PR:
- **GitHub**: `gh pr create --base <default-branch> --title "Title" --body "..."`
- **Gitea**: `tea pulls create --head <branch> --base <default-branch> --title "Title" --description "..."`
5. Include a summary section and test plan in the PR body
6. Return the PR URL
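The detection logic in steps 1-2 can be sketched as two shell helpers; `pr_tool_for` and `default_branch` are hypothetical names. Note that piping `git symbolic-ref` straight into `sed`, as step 2 shows, would mask the failure exit status in POSIX sh, so this sketch captures the output first and falls back to `main` explicitly.

```shell
# pr_tool_for REMOTE_URL -- step 1: pick the PR CLI from the hosting platform.
pr_tool_for() {
    case $1 in
        *github.com*) echo "gh" ;;  # GitHub: `gh pr create`
        *)            echo "tea" ;; # Gitea hosts, e.g. git.manticorum.com: `tea pulls create`
    esac
}

# default_branch REMOTE -- step 2: read the remote HEAD, falling back to main.
default_branch() {
    ref=$(git symbolic-ref -q "refs/remotes/$1/HEAD") \
        && echo "${ref##*/}" \
        || echo main
}
```

`refs/remotes/<remote>/HEAD` only exists after a clone or `git remote set-head`, which is why the `main` fallback matters in practice.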
## Important
- This command IS explicit approval to commit, push, and (if requested) create a PR
- Do NOT ask for confirmation — the user invoked this command intentionally
- If push fails, show the error and suggest remediation
- Never force push unless the user explicitly says to
- If on `main` branch and `pr` is requested, warn that PRs are typically from feature branches


@@ -11,31 +11,101 @@ import json
import re
import subprocess
import sys
from datetime import datetime
from pathlib import Path
LOG_FILE = Path("/tmp/session-memory-hook.log")
def log(msg: str):
"""Append a timestamped message to the hook log file."""
with open(LOG_FILE, "a") as f:
f.write(f"{datetime.now().isoformat(timespec='seconds')} {msg}\n")
def log_separator():
"""Write a visual separator to the log for readability between sessions."""
with open(LOG_FILE, "a") as f:
f.write(f"\n{'='*72}\n")
f.write(
f" SESSION MEMORY HOOK — {datetime.now().isoformat(timespec='seconds')}\n"
)
f.write(f"{'='*72}\n")
def read_stdin():
"""Read the hook input JSON from stdin."""
try:
return json.loads(sys.stdin.read())
except (json.JSONDecodeError, EOFError):
raw = sys.stdin.read()
log(f"[stdin] Raw input length: {len(raw)} chars")
data = json.loads(raw)
log(f"[stdin] Parsed keys: {list(data.keys())}")
return data
except (json.JSONDecodeError, EOFError) as e:
log(f"[stdin] ERROR: Failed to parse input: {e}")
return {}
def read_transcript(transcript_path: str) -> list[dict]:
"""Read JSONL transcript file into a list of message dicts."""
"""Read JSONL transcript file into a list of normalized message dicts.
Claude Code transcripts use a wrapper format where each line is:
{"type": "user"|"assistant"|..., "message": {"role": ..., "content": ...}, ...}
This function unwraps them into the inner {"role": ..., "content": ...} dicts
that the rest of the code expects. Non-message entries (like file-history-snapshot)
are filtered out.
"""
messages = []
path = Path(transcript_path)
if not path.exists():
log(f"[transcript] ERROR: File does not exist: {transcript_path}")
return messages
file_size = path.stat().st_size
log(f"[transcript] Reading {transcript_path} ({file_size} bytes)")
parse_errors = 0
skipped_types = {}
line_num = 0
with open(path) as f:
for line in f:
for line_num, line in enumerate(f, 1):
line = line.strip()
if line:
try:
messages.append(json.loads(line))
except json.JSONDecodeError:
continue
if not line:
continue
try:
raw = json.loads(line)
except json.JSONDecodeError:
parse_errors += 1
continue
# Claude Code transcript format: wrapper with "type" and "message" keys
# Unwrap to get the inner message dict with "role" and "content"
if "message" in raw and isinstance(raw["message"], dict):
inner = raw["message"]
# Carry over the wrapper type for logging
wrapper_type = raw.get("type", "unknown")
if "role" not in inner:
inner["role"] = wrapper_type
messages.append(inner)
elif "role" in raw:
# Already in the expected format (future-proofing)
messages.append(raw)
else:
# Non-message entry (file-history-snapshot, etc.)
entry_type = raw.get("type", "unknown")
skipped_types[entry_type] = skipped_types.get(entry_type, 0) + 1
if parse_errors:
log(f"[transcript] WARNING: {parse_errors} lines failed to parse")
if skipped_types:
log(f"[transcript] Skipped non-message entries: {skipped_types}")
log(f"[transcript] Loaded {len(messages)} messages from {line_num} lines")
# Log role breakdown
role_counts = {}
for msg in messages:
role = msg.get("role", "unknown")
role_counts[role] = role_counts.get(role, 0) + 1
log(f"[transcript] Role breakdown: {role_counts}")
return messages
@@ -50,6 +120,7 @@ def find_last_memory_command_index(messages: list[dict]) -> int:
Returns -1 if no claude-memory commands were found.
"""
last_index = -1
found_commands = []
for i, msg in enumerate(messages):
if msg.get("role") != "assistant":
continue
@@ -66,6 +137,14 @@ def find_last_memory_command_index(messages: list[dict]) -> int:
cmd = block.get("input", {}).get("command", "")
if "claude-memory" in cmd:
last_index = i
found_commands.append(f"msg[{i}]: {cmd[:100]}")
if found_commands:
log(f"[cutoff] Found {len(found_commands)} claude-memory commands:")
for fc in found_commands:
log(f"[cutoff] {fc}")
log(f"[cutoff] Will slice after message index {last_index}")
else:
log("[cutoff] No claude-memory commands found — processing full transcript")
return last_index
@@ -107,6 +186,14 @@ def extract_tool_uses(messages: list[dict]) -> list[dict]:
for block in content:
if isinstance(block, dict) and block.get("type") == "tool_use":
tool_uses.append(block)
# Log tool use breakdown
tool_counts = {}
for tu in tool_uses:
name = tu.get("name", "unknown")
tool_counts[name] = tool_counts.get(name, 0) + 1
log(f"[tools] Extracted {len(tool_uses)} tool uses: {tool_counts}")
return tool_uses
@@ -119,6 +206,7 @@ def find_git_commits(tool_uses: list[dict]) -> list[str]:
cmd = tu.get("input", {}).get("command", "")
if "git commit" in cmd:
commits.append(cmd)
log(f"[commits] Found {len(commits)} git commit commands")
return commits
@@ -131,6 +219,9 @@ def find_files_edited(tool_uses: list[dict]) -> set[str]:
fp = tu.get("input", {}).get("file_path", "")
if fp:
files.add(fp)
log(f"[files] Found {len(files)} edited files:")
for f in sorted(files):
log(f"[files] {f}")
return files
@@ -150,6 +241,7 @@ def find_errors_encountered(messages: list[dict]) -> list[str]:
error_text = extract_text_content({"content": block.get("content", "")})
if error_text and len(error_text) > 10:
errors.append(error_text[:500])
log(f"[errors] Found {len(errors)} error tool results")
return errors
@@ -168,16 +260,23 @@ def detect_project(cwd: str, files_edited: set[str]) -> str:
for path in all_paths:
for indicator, project in project_indicators.items():
if indicator in path.lower():
log(
f"[project] Detected '{project}' from path containing '{indicator}': {path}"
)
return project
# Fall back to last directory component of cwd
return Path(cwd).name
fallback = Path(cwd).name
log(f"[project] No indicator matched, falling back to cwd name: {fallback}")
return fallback
def build_session_summary(messages: list[dict], cwd: str) -> dict | None:
"""Analyze the transcript and build a summary of storable events."""
log(f"[summary] Building summary from {len(messages)} messages, cwd={cwd}")
if len(messages) < 4:
# Too short to be meaningful
return None
log(f"[summary] SKIP: only {len(messages)} messages, need at least 4")
return "too_short"
tool_uses = extract_tool_uses(messages)
commits = find_git_commits(tool_uses)
@@ -194,31 +293,73 @@ def build_session_summary(messages: list[dict], cwd: str) -> dict | None:
assistant_texts.append(text)
full_assistant_text = "\n".join(assistant_texts)
log(
f"[summary] Assistant text: {len(full_assistant_text)} chars from {len(assistant_texts)} messages"
)
# Detect what kind of work was done
work_types = set()
if commits:
work_types.add("commit")
if errors:
work_types.add("debugging")
if any("test" in f.lower() for f in files_edited):
work_types.add("testing")
if any(kw in full_assistant_text.lower() for kw in ["bug", "fix", "error", "issue"]):
work_types.add("fix")
if any(kw in full_assistant_text.lower() for kw in ["refactor", "restructure", "reorganize"]):
work_types.add("refactoring")
if any(kw in full_assistant_text.lower() for kw in ["new feature", "implement", "add support"]):
work_types.add("feature")
if any(kw in full_assistant_text.lower() for kw in ["deploy", "production", "release"]):
work_types.add("deployment")
if any(kw in full_assistant_text.lower() for kw in ["config", "setup", "install", "configure"]):
work_types.add("configuration")
if any(kw in full_assistant_text.lower() for kw in ["hook", "script", "automat"]):
work_types.add("automation")
keyword_checks = {
"commit": lambda: bool(commits),
"debugging": lambda: bool(errors),
"testing": lambda: any("test" in f.lower() for f in files_edited),
"fix": lambda: any(
kw in full_assistant_text.lower() for kw in ["bug", "fix", "error", "issue"]
),
"refactoring": lambda: any(
kw in full_assistant_text.lower()
for kw in ["refactor", "restructure", "reorganize"]
),
"feature": lambda: any(
kw in full_assistant_text.lower()
for kw in ["new feature", "implement", "add support"]
),
"deployment": lambda: any(
kw in full_assistant_text.lower()
for kw in ["deploy", "production", "release"]
),
"configuration": lambda: any(
kw in full_assistant_text.lower()
for kw in ["config", "setup", "install", "configure"]
),
"automation": lambda: any(
kw in full_assistant_text.lower() for kw in ["hook", "script", "automat"]
),
"tooling": lambda: any(
kw in full_assistant_text.lower()
for kw in [
"skill",
"command",
"slash command",
"commit-push",
"claude code command",
]
),
"creation": lambda: any(
kw in full_assistant_text.lower()
for kw in ["create a ", "created", "new file", "wrote a"]
),
}
for work_type, check_fn in keyword_checks.items():
matched = check_fn()
if matched:
work_types.add(work_type)
log(f"[work_type] MATCH: {work_type}")
else:
log(f"[work_type] no match: {work_type}")
if not work_types and not files_edited:
# Likely a research/chat session, skip
return None
log("[summary] SKIP: no work types detected and no files edited")
# Log a snippet of assistant text to help debug missed keywords
snippet = full_assistant_text[:500].replace("\n", " ")
log(f"[summary] Assistant text preview: {snippet}")
return "no_work"
log(
f"[summary] Result: project={project}, work_types={sorted(work_types)}, "
f"commits={len(commits)}, files={len(files_edited)}, errors={len(errors)}"
)
return {
"project": project,
@@ -258,7 +399,9 @@ def build_memory_content(summary: dict) -> str:
work_desc = ", ".join(sorted(summary["work_types"]))
parts.append(f"Work types: {work_desc}")
parts.append(f"Session size: {summary['message_count']} messages, {summary['tool_use_count']} tool calls")
parts.append(
f"Session size: {summary['message_count']} messages, {summary['tool_use_count']} tool calls"
)
return "\n".join(parts)
@@ -276,7 +419,9 @@ def determine_memory_type(summary: dict) -> str:
return "code_pattern"
if "deployment" in wt:
return "workflow"
if "automation" in wt:
if "automation" in wt or "tooling" in wt:
return "workflow"
if "creation" in wt:
return "workflow"
return "general"
@@ -319,55 +464,85 @@ def store_memory(summary: dict):
tag_str = ",".join(tags)
cmd = [
"claude-memory", "store",
"--type", mem_type,
"--title", title,
"--content", content,
"--tags", tag_str,
"--importance", importance,
"claude-memory",
"store",
"--type",
mem_type,
"--title",
title,
"--content",
content,
"--tags",
tag_str,
"--importance",
importance,
"--episode",
]
log(f"[store] Memory type: {mem_type}, importance: {importance}")
log(f"[store] Title: {title}")
log(f"[store] Tags: {tag_str}")
log(f"[store] Content length: {len(content)} chars")
log(f"[store] Command: {' '.join(cmd)}")
try:
result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
if result.returncode == 0:
print(f"Session memory stored: {title}", file=sys.stderr)
log(f"[store] SUCCESS: {title}")
if result.stdout.strip():
log(f"[store] stdout: {result.stdout.strip()[:200]}")
else:
print(f"Memory store failed: {result.stderr}", file=sys.stderr)
log(f"[store] FAILED (rc={result.returncode}): {result.stderr.strip()}")
if result.stdout.strip():
log(f"[store] stdout: {result.stdout.strip()[:200]}")
except subprocess.TimeoutExpired:
print("Memory store timed out", file=sys.stderr)
log("[store] FAILED: claude-memory timed out after 10s")
except FileNotFoundError:
log("[store] FAILED: claude-memory command not found in PATH")
except Exception as e:
print(f"Memory store error: {e}", file=sys.stderr)
log(f"[store] FAILED: {type(e).__name__}: {e}")
def main():
log_separator()
hook_input = read_stdin()
transcript_path = hook_input.get("transcript_path", "")
cwd = hook_input.get("cwd", "")
log(f"[main] cwd: {cwd}")
log(f"[main] transcript_path: {transcript_path}")
if not transcript_path:
print("No transcript path provided", file=sys.stderr)
log("[main] ABORT: no transcript path provided")
sys.exit(0)
messages = read_transcript(transcript_path)
if not messages:
log("[main] ABORT: empty transcript")
sys.exit(0)
total_messages = len(messages)
# Only process messages after the last claude-memory command to avoid
# duplicating memories that were already stored during the session.
cutoff = find_last_memory_command_index(messages)
if cutoff >= 0:
messages = messages[cutoff + 1:]
messages = messages[cutoff + 1 :]
log(f"[main] After cutoff: {len(messages)} of {total_messages} messages remain")
if not messages:
print("No new messages after last claude-memory command", file=sys.stderr)
log("[main] ABORT: no new messages after last claude-memory command")
sys.exit(0)
else:
log(f"[main] Processing all {total_messages} messages (no cutoff)")
summary = build_session_summary(messages, cwd)
if summary is None:
print("Session too short or no significant work detected", file=sys.stderr)
if not isinstance(summary, dict):
log(f"[main] ABORT: build_session_summary returned '{summary}'")
sys.exit(0)
store_memory(summary)
log("[main] Done")
if __name__ == "__main__":