Low-friction thought capture (text + voice)
Cal Corum 53caf193af fix: fall back to CPU when CUDA inference fails at transcribe time
CUDA model loading can succeed even when runtime libs like libcublas
are missing — the error only surfaces during model.transcribe(). Catch
that and retry on CPU so transcription still works without the full
CUDA toolkit installed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 14:38:58 -06:00
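The fallback described in the commit can be sketched as a try/retry wrapper around the transcription call. This is a minimal illustration, not the project's actual code: `run_transcription` is a hypothetical stand-in for loading a Whisper model and calling `model.transcribe()`, and here it simulates the CUDA-time failure the commit describes.

```python
# Sketch of the CPU-fallback pattern from the commit message.
# `run_transcription` is a hypothetical stand-in for the real
# model-loading + model.transcribe() call; it simulates the case
# where CUDA model loading "succeeds" but inference fails because
# runtime libs like libcublas are missing.

def run_transcription(audio_path: str, device: str) -> str:
    if device == "cuda":
        # Simulated transcribe-time failure on a machine without
        # the full CUDA toolkit installed.
        raise RuntimeError("libcublas.so: cannot open shared object file")
    return f"transcript of {audio_path} ({device})"

def transcribe_with_fallback(audio_path: str) -> str:
    """Try CUDA first; retry on CPU if inference fails at transcribe time."""
    try:
        return run_transcription(audio_path, device="cuda")
    except (RuntimeError, OSError) as err:
        print(f"CUDA inference failed ({err}); retrying on CPU")
        return run_transcription(audio_path, device="cpu")

print(transcribe_with_fallback("note.wav"))
```

Catching the error at the `transcribe()` call rather than at model load is the key point: load-time checks alone would not detect the missing runtime libraries.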
src/my_memory fix: fall back to CPU when CUDA inference fails at transcribe time 2026-02-11 14:38:58 -06:00
.gitignore feat: initial commit — voice/text memory capture with kanban board 2026-02-11 13:55:21 -06:00
.python-version feat: initial commit — voice/text memory capture with kanban board 2026-02-11 13:55:21 -06:00
my-memory.desktop feat: initial commit — voice/text memory capture with kanban board 2026-02-11 13:55:21 -06:00
pyproject.toml feat: initial commit — voice/text memory capture with kanban board 2026-02-11 13:55:21 -06:00
README.md feat: initial commit — voice/text memory capture with kanban board 2026-02-11 13:55:21 -06:00