I built DiscoMonday, a voice AI guide that lets any visitor get real-time narration based on their location — using only their phone and earbuds. No hardware, no app install, and no code required.
This is a tech preview of the core engine, not the full SaaS product yet. You can:
- Use GPS or tap a map to set your location
- Hear the AI start speaking based on where you are
- Interrupt mid-sentence and ask a follow-up question
- Get a real-time, spoken reply via OpenAI + LiveKit
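For the curious, here's a minimal sketch of how location-keyed narration can work under the hood: snap the visitor's coordinates to the nearest point of interest, which then seeds the narration prompt. This is illustrative only — the function names, the POI shape, and the distance threshold are my assumptions, not DiscoMonday's actual code.

```typescript
interface Poi {
  name: string;
  lat: number;
  lon: number;
}

// Great-circle distance in meters (haversine formula).
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Return the closest POI, or null if nothing is within maxMeters
// (so the guide stays quiet when there's nothing nearby to narrate).
function nearestPoi(lat: number, lon: number, pois: Poi[], maxMeters = 200): Poi | null {
  let best: Poi | null = null;
  let bestDist = Infinity;
  for (const poi of pois) {
    const d = distanceMeters(lat, lon, poi.lat, poi.lon);
    if (d < bestDist) {
      best = poi;
      bestDist = d;
    }
  }
  return bestDist <= maxMeters ? best : null;
}
```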
Try the demo: https://discomonday.com
If GPS isn't enabled, tap the map to simulate a location. If the server is overloaded, you'll still be added to the waitlist (every signup is). Mic and location permissions are required — this is an audio-first experience.
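The GPS-or-map-tap fallback above can be sketched as a small helper: try a GPS provider first, and if it rejects (unavailable or permission denied), fall back to whatever coordinate the visitor tapped. The names here are hypothetical — a sketch of the pattern, not the product's actual implementation.

```typescript
interface Coords {
  lat: number;
  lon: number;
}

// getGps: resolves with real GPS coordinates or rejects — in a browser this
// could wrap navigator.geolocation.getCurrentPosition in a Promise.
// getMapTap: resolves with the coordinate from a tap on the map.
async function resolveLocation(
  getGps: () => Promise<Coords>,
  getMapTap: () => Promise<Coords>,
): Promise<Coords> {
  try {
    return await getGps();
  } catch {
    // GPS unavailable or denied: simulate the location via map tap.
    return await getMapTap();
  }
}
```

Injecting the providers as functions keeps the fallback logic testable outside the browser.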
Why I built this: On a city trip, my wife and I kept asking “What’s that building?” and no app could answer. So I hacked together a prototype. The current system is a step toward making voice-based, location-aware interfaces dead simple to deploy — for museums, exhibitions, tourism, and more.
What I’d love feedback on:
- Does the voice feel fast and natural enough?
- Was anything confusing or rough?
- What use cases would you personally want this for?
Privacy note: We strip all personal data and keep only anonymized usage stats to improve the experience.
—
I’m Mark I. Matsushima, a solo founder based in Okinawa, Japan. I built this with Next.js, OpenAI’s Realtime API, LiveKit, and AWS Lambda / EC2. Appreciate your feedback!