# My Memory

Low-friction capture app for thoughts, text, and voice. Runs as a system tray app on Linux (PySide6).

## Tech Stack

- **Language:** Python 3.13+ managed with `uv`
- **UI Framework:** PySide6 (Qt 6)
- **Transcription:** faster-whisper (CTranslate2) with CUDA auto-fallback to CPU
- **Data Format:** Markdown files with YAML frontmatter (`python-frontmatter`)
- **Models:** Pydantic v2

## Running

```bash
uv run my-memory                  # Launch tray app
uv run my-memory --capture        # Open capture window (or signal running instance)
uv run my-memory --board          # Open kanban board (or signal running instance)
uv run my-memory --download-model # Pre-download Whisper model
```

## Project Structure

```
src/my_memory/
  __main__.py        # CLI entry point, single-instance dispatch
  app.py             # QApplication, system tray, IPC via QLocalServer
  config.py          # Config dataclasses + TOML loading (~/.my-memory/config.toml)
  models.py          # Pydantic models: Entry, EntrySource, EntryStatus
  storage.py         # Markdown + YAML frontmatter file I/O
  schema.py          # Auto-generated schema.md for AI agent discovery
  capture_window.py  # Frameless popup: text input, voice recording, transcription
  board_window.py    # Kanban board: On Docket -> In Progress -> Complete
  audio_recorder.py  # sounddevice recording with RMS level signals
  transcriber.py     # faster-whisper with QThread worker, CUDA/CPU fallback
```

## Data Storage

Entries live in `~/.my-memory/entries/YYYY-MM-DD/{uuid}.md` with YAML frontmatter. Voice entries have a paired `.wav` file.
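As a rough sketch of the on-disk format, an entry file splits into a `---`-delimited frontmatter block and a Markdown body. The field names below (`id`, `status`, `source`) are illustrative assumptions, and the app itself parses entries with `python-frontmatter` and validates them with Pydantic rather than with a hand-rolled splitter like this:

```python
def parse_entry(text: str) -> tuple[dict[str, str], str]:
    """Split a '---'-delimited YAML frontmatter block from the Markdown body.

    Minimal sketch: handles only flat `key: value` pairs, unlike a real
    YAML parser.
    """
    _, header, body = text.split("---\n", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(": ")
        meta[key] = value
    return meta, body.strip()


# Hypothetical entry file content (field names are assumptions):
sample = """---
id: example-uuid
status: docket
source: voice
---
Remember to water the plants.
"""

meta, body = parse_entry(sample)
```

In the real app, `storage.py` owns this I/O, so the frontmatter keys stay consistent with the `Entry` model in `models.py`.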
## Entry Statuses

- `docket` — Newly captured, waiting to be acted on
- `in_progress` — Currently being worked on
- `complete` — Done (column is collapsible with 7d/30d/All date filter)
- `cancelled` — Archived/discarded (hidden from board, file preserved on disk)

## Key Patterns

- **Single instance:** Uses `QLocalServer`/`QLocalSocket` IPC so `--capture` and `--board` flags signal the running instance
- **System tray:** App stays resident with `setQuitOnLastWindowClosed(False)`; Ctrl+C is handled via SIGINT + QTimer
- **Lazy model loading:** Whisper model is loaded on first transcription, not at startup
- **File watching:** Board auto-refreshes via `QFileSystemWatcher` with 300ms debounce
- **Collapsible Complete column:** Collapsed by default, shows count badge; when expanded, filters by date (7d default)

## Configuration

Optional `~/.my-memory/config.toml`:

```toml
[whisper]
model_size = "base"      # tiny, base, small, medium, large-v3
device = "auto"          # auto, cuda, cpu
compute_type = "float16"

[audio]
sample_rate = 16000
channels = 1
```

## Gitea

- **Repo:** https://git.manticorum.com/cal/my-memory
- **Remote name:** origin
- **Branch:** master