aside is a ~5MB Rust binary that records meetings and feeds them into an AI-native transcription-to-vault workflow:
1. Record — captures mic + system audio simultaneously (stereo WAV, left=you, right=them), with a timestamped TUI editor for real-time notes.
2. Transcribe — local transcription via whisper.cpp, with a 7-stage cleanup pipeline that strips hallucinations, deduplicates backchannels, and merges fragments.
3. Align — interleaves your memo lines with the transcript on a shared timeline, so you can see what was said around each thing you noted.
4. Distill — a Claude Code skill searches your Obsidian vault for related notes, then writes a structured note back into it with [[wikilinks]] to your existing thinking.
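To make the "left=you, right=them" layout concrete, here's a minimal sketch of interleaving the mic stream and the system-audio stream into stereo frames. The function name is illustrative, not aside's actual API, and real capture would handle resampling and drift:

```rust
// Interleave two mono f32 streams into stereo frames [L, R, L, R, ...].
// Left channel carries your mic; right channel carries system audio.
fn interleave_stereo(mic: &[f32], system: &[f32]) -> Vec<f32> {
    let n = mic.len().min(system.len());
    let mut out = Vec::with_capacity(n * 2);
    for i in 0..n {
        out.push(mic[i]);    // left: you
        out.push(system[i]); // right: them
    }
    out
}

fn main() {
    let frames = interleave_stereo(&[0.1, 0.2], &[0.9, 0.8]);
    assert_eq!(frames, vec![0.1, 0.9, 0.2, 0.8]);
}
```

Keeping the channels separate is what lets the transcription step attribute speech per speaker without diarization.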
The distillation step is the part I'm most interested in feedback on. The entire distillation pipeline is a 240-line markdown file (a Claude Code "skill") that the LLM follows as instructions. It treats your memo lines as attention signals: what you wrote down mid-call gets priority over what you didn't, and lines you edited mid-meeting get weighted higher still. It searches your vault semantically and by structure (tags, people, wikilinks), then weaves connections into the output note.
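The actual prioritization lives in the markdown skill, but the ranking rule described above (noted topics outrank unnoted ones, and mid-meeting edits outrank both) could be expressed as a simple weight function. This is a hypothetical illustration, not code from the repo:

```rust
// Hypothetical sketch of "memo lines as attention signals".
#[derive(Debug, PartialEq)]
enum Signal {
    Unnoted,        // appeared only in the transcript
    Noted,          // you wrote it down mid-call
    NotedAndEdited, // you wrote it down, then revised it mid-meeting
}

// Higher weight = more prominence in the distilled note.
fn attention_weight(signal: &Signal) -> u32 {
    match signal {
        Signal::Unnoted => 1,
        Signal::Noted => 2,
        Signal::NotedAndEdited => 3,
    }
}

fn main() {
    assert!(attention_weight(&Signal::NotedAndEdited) > attention_weight(&Signal::Noted));
    assert!(attention_weight(&Signal::Noted) > attention_weight(&Signal::Unnoted));
}
```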
The output doesn't read like a meeting summary with action items. It reads like what you would have written up yourself if you had the time — highlighting what mattered to you, connected to what you've already been thinking about.
The skill is plain text. You can read it in the repo, fork it, change how it prioritizes topics or formats notes. There's no black box between the transcript and the vault note — just markdown instructions an LLM follows.
This matters because the workflow is vault-native end to end: sessions live inside the vault, distilled notes land where your thinking already lives, and the AI step uses your vault's own language and structure as context. The meeting note isn't a standalone summary — it's a node in your existing knowledge graph.
Technical details: 3.1k lines total. Rust binary handles recording + TUI (cpal for audio, ratatui for the editor, lock-free ring buffers for real-time capture). Python script handles transcription cleanup. Claude Code skill handles distillation. Everything runs locally except the optional LLM call.
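For readers curious about the lock-free ring buffers: the idea is that the audio callback (producer) can hand off samples without ever taking a lock, which matters because blocking inside a real-time audio callback causes dropouts. A minimal single-producer/single-consumer sketch, not aside's actual implementation:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};

// Minimal SPSC lock-free ring buffer sketch. One thread pushes
// (the audio callback), one thread pops (the WAV writer).
pub struct SpscRing {
    buf: Vec<UnsafeCell<f32>>,
    cap: usize,
    head: AtomicUsize, // next write index (producer-owned)
    tail: AtomicUsize, // next read index (consumer-owned)
}

// Safe only under the SPSC discipline: exactly one pusher, one popper.
unsafe impl Sync for SpscRing {}

impl SpscRing {
    pub fn new(cap: usize) -> Self {
        SpscRing {
            buf: (0..cap).map(|_| UnsafeCell::new(0.0)).collect(),
            cap,
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    /// Producer side: returns false (drops the sample) when full,
    /// which is preferable to blocking inside an audio callback.
    pub fn push(&self, sample: f32) -> bool {
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        if head.wrapping_sub(tail) == self.cap {
            return false; // full
        }
        unsafe { *self.buf[head % self.cap].get() = sample };
        self.head.store(head.wrapping_add(1), Ordering::Release);
        true
    }

    /// Consumer side: None when empty.
    pub fn pop(&self) -> Option<f32> {
        let tail = self.tail.load(Ordering::Relaxed);
        let head = self.head.load(Ordering::Acquire);
        if tail == head {
            return None; // empty
        }
        let s = unsafe { *self.buf[tail % self.cap].get() };
        self.tail.store(tail.wrapping_add(1), Ordering::Release);
        Some(s)
    }
}

fn main() {
    let ring = SpscRing::new(4);
    assert!(ring.push(0.5));
    assert!(ring.push(-0.25));
    assert_eq!(ring.pop(), Some(0.5));
    assert_eq!(ring.pop(), Some(-0.25));
    assert_eq!(ring.pop(), None);
}
```

The acquire/release pairing on head/tail is what lets each side see the other's progress without a mutex.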
Dual audio capture solves hybrid meetings — conference room mic + remote participants on speaker both get transcribed on separate channels.
No accounts, no telemetry, Apache 2.0. Install via `brew install jshph/aside/aside`.