We launched Ztalk (https://www.producthunt.com/products/ztalk-ai) on Product Hunt 10 days ago and ended the day with the second-highest upvote count (545). Here's a short demo video (https://www.youtube.com/watch?v=FYM9einhyAQ).
Ztalk is a real-time voice-to-voice translation app for Zoom, Google Meet, and Teams.
It adds live captions and translated voice — no extensions or plugins. Works on Mac & Windows.
After launch, we were flooded with demo requests from individuals and companies — partly driven by AI newsletter coverage — across a surprisingly wide range of use cases:
Candidates interviewing for roles in other countries
NGOs working in conflict zones
SaaS companies onboarding customers in different languages
Online therapy/support groups
Cross-border scrums, demos, and board meetings
We saw two dominant usage patterns:
Passive listening: Large webinars where users want to hear translations without speaking
Active participation: Small group conversations with real-time back-and-forth
In both cases, latency and accuracy are critical. Our internal benchmark: if we can hit <500ms latency with >95% accuracy, this could unlock a ~$10B+ market. Sanas and Krisp show that companies are already building fast-growing businesses on accent translation alone.
Tech Stack & Experiments
Surprisingly, there’s no widely available API/SDK that converts streaming voice input → translated voice output in real time.
OpenAI’s real-time API (which supports voice-to-voice translation) often breaks out of its translation role and starts responding conversationally — even with strict prompting. It also has a hardwired “no interruption” behavior, meaning it won’t speak if someone else is talking — making it unusable in overlapping conversations, which are common in live meetings.
So we built the standard 3-step pipeline:
- ASR (Speech-to-Text)
- Translation
- TTS (Text-to-Speech)
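To make the structure concrete, here's a minimal sketch of that loop in Python — `transcribe_chunk`, `translate_text`, and `synthesize_speech` are hypothetical placeholders for whichever ASR/MT/TTS providers get wired in, not our production code:

```python
# Minimal sketch of the 3-step loop. transcribe_chunk / translate_text /
# synthesize_speech are hypothetical placeholders for whichever ASR, MT,
# and TTS providers get wired in.
import queue
import threading

audio_in: "queue.Queue[bytes]" = queue.Queue()   # raw PCM chunks captured from the call
audio_out: "queue.Queue[bytes]" = queue.Queue()  # translated speech to route back out


def transcribe_chunk(pcm: bytes) -> str | None:
    """Feed a chunk to a streaming ASR backend; return text once a segment finalizes."""
    raise NotImplementedError


def translate_text(text: str, src: str, dst: str) -> str:
    """Translate a finalized segment (hosted API or local model)."""
    raise NotImplementedError


def synthesize_speech(text: str, lang: str) -> bytes:
    """Render translated text back to PCM audio with a TTS engine."""
    raise NotImplementedError


def pipeline(src_lang: str = "en", dst_lang: str = "es") -> None:
    while True:
        pcm = audio_in.get()
        text = transcribe_chunk(pcm)                             # 1. ASR
        if not text:
            continue                                             # segment not final yet
        translated = translate_text(text, src_lang, dst_lang)    # 2. Translation
        audio_out.put(synthesize_speech(translated, dst_lang))   # 3. TTS


threading.Thread(target=pipeline, daemon=True).start()
```

Most of the perceived delay comes from waiting for the ASR to decide a segment is final, which is why the chunking details below matter so much.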
Each step had challenges:
1. Speech-to-Text:
- Most APIs (Azure, AWS, ElevenLabs) expect WAV/FLAC chunks — not true streaming.
- We experimented with audio chunking over WebRTC/WebSocket — Silero was usable but often clipped mid-sentence (see the VAD sketch after this list).
- Whisper lags behind newer models in speed, streaming, and accuracy.
- GPT-4o’s streaming API had the best balance between latency and context, and supports true streaming input.
2. Translation:
- Many providers do well here.
- Smaller local models work for specific pairs (e.g., en↔es, en↔fr) with >95% accuracy.
3. TTS:
- The Web Speech API is fast but robotic.
- ElevenLabs and Cartesia produce expressive voices, but their pricing isn't viable for our target users.
- We found good results with VITS (a conditional-VAE-based end-to-end TTS model), which offers diverse voice options per language (a local VITS sketch also follows below).
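On the Silero clipping issue above: here's a minimal sketch of VAD-gated chunking using silero-vad's VADIterator, so the ASR receives whole utterances instead of mid-sentence clips. `send_to_asr` is a hypothetical placeholder, and the 512-sample window and 300 ms silence threshold are assumptions you'd tune (recent silero-vad releases expect fixed 512-sample windows at 16 kHz):

```python
# Sketch: VAD-gated chunking so the ASR gets whole utterances instead of
# mid-sentence clips. Uses the silero-vad model from torch.hub; send_to_asr
# is a hypothetical placeholder for the streaming ASR backend.
import torch

SAMPLE_RATE = 16_000
WINDOW = 512  # samples per VAD window; recent silero-vad releases expect 512 @ 16 kHz

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
(_, _, _, VADIterator, _) = utils
vad = VADIterator(model, sampling_rate=SAMPLE_RATE,
                  min_silence_duration_ms=300)  # wait 300 ms of silence before closing a segment

buffer: list[torch.Tensor] = []
in_speech = False


def send_to_asr(segment: torch.Tensor) -> None:
    """Hypothetical: ship one complete speech segment to the streaming ASR backend."""
    ...


def on_audio(window: torch.Tensor) -> None:
    """Call from the capture loop with one float32 window of WINDOW samples."""
    global in_speech
    assert window.shape[-1] == WINDOW
    event = vad(window, return_seconds=True)  # {'start': t}, {'end': t}, or None
    if event and "start" in event:
        in_speech = True
    if in_speech:
        buffer.append(window)
    if event and "end" in event:
        send_to_asr(torch.cat(buffer))
        buffer.clear()
        in_speech = False
```

The `min_silence_duration_ms` value is the usual trade-off: larger values clip less mid-sentence but delay the moment translation can start.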
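And for the VITS option: one low-friction way to experiment with it is the Coqui TTS package — this is an illustration, not necessarily how we run it in production, and the model name and speaker choice are just examples (multi-speaker checkpoints list their voices via `tts.speakers`):

```python
# Sketch: local VITS synthesis via the Coqui TTS package (pip install TTS).
# The model name and speaker choice are illustrative; multi-speaker VITS
# checkpoints expose their available voices via tts.speakers.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/vctk/vits")  # multi-speaker English VITS

print(tts.speakers[:5])                          # inspect available voice IDs

tts.tts_to_file(
    text="The translated sentence goes here.",
    speaker=tts.speakers[0],                     # pick any listed voice
    file_path="translated.wav",
)
```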
With recent AI breakthroughs, I’d love to open a discussion on how real-time translation is evolving — and where it might realistically go:
- Are there newer APIs or OSS projects that simplify the voice-to-voice stack?
- Can on-device models realistically hit sub-400ms round-trips?
- Any merged pipelines (ASR + translation + TTS) trained end-to-end?
- Could forward-leaning models reduce latency in verb-final languages like Hindi/Japanese by predicting intent early?
Also: What does good product design look like if 1.5–2.5s latency remains for the foreseeable future?
We currently support full-duplex calls via audio routing and virtual mixing, with per-user toggles to choose original vs. translated voice. It works well, though we’re still refining UX for edge cases like overlapping speech and noisy input.
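To illustrate the routing idea, here's a minimal sketch that pushes translated audio into a virtual loopback device with the sounddevice library so the meeting app can select it as a microphone. The device name is an assumption (e.g. BlackHole on macOS, VB-CABLE on Windows), and this is a simplified stand-in for our actual mixing setup:

```python
# Sketch: play translated speech into a virtual loopback device so the meeting
# app can pick it up as a microphone. Assumes a loopback driver is installed
# (e.g. BlackHole on macOS, VB-CABLE on Windows); device names will differ.
import numpy as np
import sounddevice as sd

VIRTUAL_DEVICE = "BlackHole 2ch"  # assumption: name of the installed loopback device
SAMPLE_RATE = 48_000


def find_device(name_fragment: str) -> int:
    """Return the index of the first output device whose name contains name_fragment."""
    for idx, dev in enumerate(sd.query_devices()):
        if name_fragment.lower() in dev["name"].lower() and dev["max_output_channels"] > 0:
            return idx
    raise RuntimeError(f"No output device matching {name_fragment!r}")


def play_translated(pcm: np.ndarray) -> None:
    """Write a float32 mono buffer of translated speech to the virtual mic."""
    sd.play(pcm, samplerate=SAMPLE_RATE, device=find_device(VIRTUAL_DEVICE), blocking=True)


if __name__ == "__main__":
    # 1-second 440 Hz test tone in place of real TTS output.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    play_translated((0.1 * np.sin(2 * np.pi * 440 * t)).astype(np.float32))
```

Full duplex is then essentially one such path per direction, with the per-user toggle deciding whether the original or translated stream gets mixed into the virtual device.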
Would love to hear your thoughts or stack choices if you've built anything similar.
aksinghal654•8h ago
As someone on international calls daily, this solves a real pain point. Well done!
siddhant_mohan•8h ago
This is a space I’ve been watching closely — what TTS voices did you find most natural across languages?
riteshs•8h ago
Interesting take on real-time translation. How do you handle speech overlap when multiple people talk at once?
shivamitm•7h ago
Would be interesting to experiment with forward-leaning translation + low-confidence overlays — e.g., show a "probable translation" immediately, then replace it once full intent is clearer. Might reduce perceived latency even if real latency stays ~2s.
SiddhantMalik•7h ago
If you haven’t already, look at Deepgram’s streaming ASR for speaker turn detection — it handles overlap better than OpenAI’s strict no-interruption rule and might pair well with an async translation layer.
anuragdt•5h ago
Interesting use case
brajendra01872•5h ago
Really interesting approach with full-duplex routing and virtual drivers. Curious if you've looked into low-level WASAPI or CoreAudio hooks to reduce routing overhead on Windows/macOS — might help avoid the need for 3rd-party loopback tools entirely.
poorva•3h ago
The hardest part of real-time translation isn’t translation — it’s audio synchronization, UX flow, and managing expectations under variable latency. Really curious how you're thinking about fallback modes (e.g., subtitle-only if TTS lags).
DhirajSingh•1h ago
Have you considered training a small end-to-end voice2voice model using student-teacher distillation from the GPT-4o pipeline? Even a narrow domain (e.g., customer support) could benefit from a custom fast model that bypasses intermediate text.