From 9e3be98af9742b5d0691934837d54a597e343bf1 Mon Sep 17 00:00:00 2001
From: Cal Corum
Date: Thu, 19 Feb 2026 16:03:22 -0600
Subject: [PATCH] store: Semantic search enabled by default for memory recall

---
 ...led-by-default-for-memory-recall-e94ae9.md | 22 +++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 graph/decisions/semantic-search-enabled-by-default-for-memory-recall-e94ae9.md

diff --git a/graph/decisions/semantic-search-enabled-by-default-for-memory-recall-e94ae9.md b/graph/decisions/semantic-search-enabled-by-default-for-memory-recall-e94ae9.md
new file mode 100644
index 00000000000..e5e2a4589a5
--- /dev/null
+++ b/graph/decisions/semantic-search-enabled-by-default-for-memory-recall-e94ae9.md
@@ -0,0 +1,22 @@
+---
+id: e94ae963-9bf9-4adc-b0d7-ec18d9e70999
+type: decision
+title: "Semantic search enabled by default for memory recall"
+tags: [cognitive-memory, decision, semantic-search, configuration]
+importance: 0.7
+confidence: 0.8
+created: "2026-02-19T22:03:22.589620+00:00"
+updated: "2026-02-19T22:03:22.589620+00:00"
+---
+
+Changed `recall()` default from `semantic=False` to `semantic=True`. All recall calls now use semantic+keyword merge when embeddings exist.
+
+**Rationale:** With mtime-based embeddings caching, warm semantic recall takes ~200ms — imperceptible within Claude Code sessions where API roundtrips are measured in seconds. The quality improvement from conceptual matching justifies the small overhead.
+
+**Changes made:**
+- `client.py`: `recall()` parameter default `semantic=True`
+- `client.py`: CLI flag changed from `--semantic` (opt-in) to `--no-semantic` (opt-out)
+- `mcp_server.py`: `arguments.get("semantic", True)`, updated tool description
+- `SKILL.md`: Updated all examples and documentation
+
+**Merge weights:** 60% semantic, 40% keyword. Use `semantic=false` / `--no-semantic` when speed matters more than depth (~3ms vs ~200ms).
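
The 60%/40% merge described in the decision note can be sketched as a weighted score combination. This is a hypothetical illustration, not the actual `client.py` implementation — the function name `merge_results`, the dict-based inputs, and the normalization assumption are all invented for the example; only the 0.6/0.4 weights come from the patch.

```python
# Illustrative sketch of a 60/40 semantic+keyword score merge.
# Assumes each retriever returns per-note scores already normalized to [0, 1].

SEMANTIC_WEIGHT = 0.6
KEYWORD_WEIGHT = 0.4

def merge_results(semantic_hits: dict[str, float],
                  keyword_hits: dict[str, float]) -> list[tuple[str, float]]:
    """Combine scores from both retrievers into one ranked list.

    A note missing from one retriever contributes 0.0 for that component,
    so keyword-only and semantic-only hits still surface, just lower.
    """
    note_ids = set(semantic_hits) | set(keyword_hits)
    merged = {
        note_id: SEMANTIC_WEIGHT * semantic_hits.get(note_id, 0.0)
                 + KEYWORD_WEIGHT * keyword_hits.get(note_id, 0.0)
        for note_id in note_ids
    }
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

ranked = merge_results(
    semantic_hits={"note-a": 0.9, "note-b": 0.5},
    keyword_hits={"note-b": 1.0, "note-c": 0.7},
)
```

With these weights a strong keyword-only match (score 1.0) caps out at 0.4, so a note must score well semantically to reach the top of the list — consistent with the patch's framing that conceptual matching drives result quality.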