Most voice apps either require cloud processing or use AI that I can't audit. I wanted something that captures the chaos without judgment, then helps organize it later.
What it does:
- Brain dump by voice → Web Speech API transcribes → compromise.js (deterministic NLP) sorts into tasks/notes (rough sketch below)
- Everything runs client-side. No servers, no auth, no data collection
- Works offline via PWA + IndexedDB
- Smart date detection: "tomorrow at 3pm" becomes an actual deadline
- Priority detection, streak tracking, keyboard shortcuts
- "Speak. Save. Sort it later." workflow
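Roughly, the capture-and-classify pipeline looks like this. It's a simplified sketch, not the exact code in the repo, and the date handling assumes the compromise-dates plugin:

```typescript
import nlp from 'compromise';
import datePlugin from 'compromise-dates'; // assumption: date phrases come from this plugin

nlp.plugin(datePlugin);

type Category = 'task' | 'note';

// Deterministic heuristic: imperative verbs and "need to / have to"
// constructions look like tasks; everything else is filed as a note.
function classify(text: string): Category {
  const doc = nlp(text);
  const looksLikeTask =
    doc.has('#Imperative') ||                // "call the dentist"
    doc.has('(need|have|want) to #Verb') ||  // "need to renew my passport"
    doc.has('remind me');                    // explicit cue
  return looksLikeTask ? 'task' : 'note';
}

// Pull out a date-like phrase ("tomorrow at 3pm") if one is present;
// compromise-dates can also resolve it to concrete timestamps.
function extractDatePhrase(text: string): string | undefined {
  return (nlp(text) as any).dates().json()[0]?.text;
}

// Browser-native transcription: Chrome/Edge expose webkitSpeechRecognition.
const Recognition =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognition = new Recognition();
recognition.lang = 'en-US';
recognition.interimResults = false;

recognition.onresult = (event: any) => {
  const text: string = event.results[event.results.length - 1][0].transcript;
  console.log(classify(text), extractDatePhrase(text), text);
};

recognition.start();
```

The point of keeping it this dumb is that every rule is a readable match pattern you can audit.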
Tech stack:

- Next.js 15, TypeScript, Tailwind
- Web Speech API (browser-native, free)
- compromise.js for NLP (no ML models)
- IndexedDB for storage
- ~300 KB bundle size
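Storage is plain IndexedDB. A bare-bones version of the persistence layer looks like this (hypothetical database/store names and record shape, not necessarily what the repo uses):

```typescript
// Hypothetical record shape for a captured brain-dump entry.
interface Entry {
  id?: number;
  text: string;
  category: 'task' | 'note';
  due?: string;      // ISO timestamp if a date was detected
  createdAt: number;
}

// Open (or create) the local database. "braindump"/"entries" are made-up names.
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('braindump', 1);
    request.onupgradeneeded = () => {
      request.result.createObjectStore('entries', {
        keyPath: 'id',
        autoIncrement: true,
      });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Persist one entry locally; nothing ever leaves the device.
async function saveEntry(entry: Entry): Promise<void> {
  const db = await openDb();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction('entries', 'readwrite');
    tx.objectStore('entries').add(entry);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```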
Tradeoffs I made:
- No cloud sync (by design) → manual export/import JSON instead (export sketch below)
- Firefox support is limited (Web Speech API not fully implemented)
- Classification isn't perfect (~85-90% accuracy) but it's transparent and auditable
- No mobile keyboard shortcuts (browser limitation)
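The export path is nothing fancy: serialize the stored entries to a JSON file the browser downloads, and import is the reverse (read the file, JSON.parse, write the records back). A sketch, with an illustrative filename:

```typescript
// Dump the user's entries to a downloadable JSON file.
function exportJson(entries: unknown[]): void {
  const blob = new Blob([JSON.stringify(entries, null, 2)], {
    type: 'application/json',
  });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'tickk-export.json'; // illustrative filename
  a.click();
  URL.revokeObjectURL(url);
}
```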
Why no AI?
- Privacy: I wanted zero data leaving the device
- Speed: deterministic rules are instant, no API latency
- Transparency: you can read exactly how classification works
- Cost: browser APIs are free

Try it: https://tickk.app (just click the mic and brain dump)
Code: https://github.com/digitalwareshub/tickk
Built primarily for ADHD brain dumps but useful for anyone who thinks faster than they type.
Happy to discuss architecture decisions, especially around the offline-first approach. Also curious if anyone has ideas for improving task/note classification without adding AI.