Because it knows who's speaking in real time, the AI can answer questions that transcript-only tools can't, like "What did Sarah commit to?" or "Based on what you know about the CTO from our last three meetings, what question should I ask right now?" It builds speaker profiles across meetings, so the context compounds over time.
It captures system audio via macOS APIs, runs speech-to-text locally using Moonshine on Apple Silicon, and does speaker identification with TitaNet neural embeddings, all on-device. The only thing that touches the cloud is AI chat, and only when you explicitly ask a question; even then it sends transcript text only, never audio.
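For anyone curious how embedding-based speaker ID works in general: each utterance gets mapped to a vector, and you match it against stored per-speaker profiles by cosine similarity. This is a minimal illustrative sketch, not the app's actual code; the function names, threshold, and toy 3-dimensional vectors are all made up (real TitaNet embeddings are much higher-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify_speaker(embedding, profiles, threshold=0.7):
    """Match an utterance embedding against known speaker profiles.

    profiles: dict mapping speaker name -> stored profile embedding.
    Returns the best-matching name, or None when nothing clears the
    threshold (i.e. this is probably a new, unenrolled speaker).
    """
    best_name, best_score = None, threshold
    for name, profile in profiles.items():
        score = cosine_similarity(embedding, profile)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold is the interesting knob: too low and distinct voices get merged, too high and the same person keeps getting enrolled as someone new.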
Built with Rust/Tauri for the native app, React for the UI, and a Python sidecar for the ML pipeline.
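A common pattern for this kind of split (and what a sidecar design usually looks like, though I'm guessing at the wire format here, not describing this app's actual protocol) is newline-delimited JSON over stdin/stdout: the Rust host spawns the Python process once and writes one request per line. A hypothetical Python-side sketch:

```python
import json
import sys

def handle_request(req: dict) -> dict:
    """Handle one JSON request from the native host (hypothetical protocol)."""
    if req.get("cmd") == "transcribe":
        # A real sidecar would run the STT / speaker-ID models here.
        return {"id": req.get("id"), "text": "(transcript placeholder)"}
    return {"id": req.get("id"), "error": "unknown command"}

def serve() -> None:
    """Read newline-delimited JSON requests forever; call to start the loop."""
    for line in sys.stdin:
        if not line.strip():
            continue
        resp = handle_request(json.loads(line))
        sys.stdout.write(json.dumps(resp) + "\n")
        sys.stdout.flush()
```

The appeal of this layout is that the heavy ML dependencies stay in the Python process, while the Tauri binary stays small and only deals with audio capture and UI.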
Works with any meeting platform (Zoom, Meet, Teams, whatever) since it just listens to system audio. No calendar access, no integrations, no account required.
Free tier has unlimited transcription. I'm a solo dev and would love feedback on what could be improved.
blakers95•1h ago
Known limitations right now: English only, macOS only (Apple Silicon), no team/sharing features.
I would love to hear what you'd use it for and what's missing.