frontpage.

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•15s ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
1•tosh•38s ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•52s ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•3m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
3•sakanakana00•6m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•9m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•9m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•11m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
3•Nive11•11m ago•4 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•15m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•17m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•20m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•22m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•26m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•28m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•31m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•31m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•32m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•37m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•43m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•44m ago•1 comments

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•49m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•51m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•57m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•1h ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•1h ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
2•myk-e•1h ago•0 comments

Open-source framework for real-time AI voice

https://github.com/videosdk-live/agents
27•sagarkava•6mo ago

Comments

sagarkava•6mo ago
Hey

I’m Sagar, co-founder of VideoSDK.

I'm beyond excited to share what we've been building: VideoSDK Real-Time AI Agents. Today, voice is becoming the new UI.

We expect agents to feel human: to understand us, respond instantly, and work seamlessly across web, mobile, and even telephony. But to achieve this, developers have to stitch together STT, LLM, and TTS, glued with HTTP endpoints and a prayer.

This most often results in agents that sound robotic, hallucinate, and fail in production environments without observability. So we built something to solve that.

Now, we are open sourcing it!

Here’s what it offers:

- Global WebRTC infra with <80ms latency
- Native turn detection, VAD, and noise suppression
- Modular pipelines for STT, LLM, TTS, avatars, and real-time model switching
- Built-in RAG + memory for grounding and hallucination resistance
- SDKs for web, mobile, Unity, IoT, and telephony, with no glue code needed
- Agent Cloud to scale infinitely with one-click deployments, or self-host with full control

Think of it like moving from a walkie-talkie to a modern network tower that handles thousands of calls.

VideoSDK gives you the infrastructure to build voice agents that actually work in the real world, at scale.
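
As a rough illustration of the modular pipeline idea above, here is a minimal sketch of a cascaded STT -> LLM -> TTS turn in plain Python. The interfaces and class names are placeholders for this example, not the actual videosdk-live/agents API; the point is that each stage sits behind a small interface and can be swapped independently.

    # Illustrative sketch only; NOT the videosdk-live/agents API.
    # Each stage (STT, LLM, TTS) is a swappable component behind a small interface.
    from typing import Protocol

    class STT(Protocol):
        def transcribe(self, audio: bytes) -> str: ...

    class LLM(Protocol):
        def complete(self, prompt: str) -> str: ...

    class TTS(Protocol):
        def synthesize(self, text: str) -> bytes: ...

    class EchoSTT:
        def transcribe(self, audio: bytes) -> str:
            # Stand-in for a real speech-to-text provider.
            return audio.decode("utf-8", errors="ignore")

    class RuleLLM:
        def complete(self, prompt: str) -> str:
            # Stand-in for a real language model call.
            return f"You said: {prompt}"

    class BytesTTS:
        def synthesize(self, text: str) -> bytes:
            # Stand-in for a real text-to-speech provider.
            return text.encode("utf-8")

    class VoicePipeline:
        """Cascades STT -> LLM -> TTS; any stage can be swapped at construction time."""

        def __init__(self, stt: STT, llm: LLM, tts: TTS) -> None:
            self.stt, self.llm, self.tts = stt, llm, tts

        def handle_turn(self, audio_in: bytes) -> bytes:
            text = self.stt.transcribe(audio_in)
            reply = self.llm.complete(text)
            return self.tts.synthesize(reply)

    if __name__ == "__main__":
        pipeline = VoicePipeline(EchoSTT(), RuleLLM(), BytesTTS())
        print(pipeline.handle_turn(b"hello agent"))

Swapping a provider then just means passing a different implementation of the same interface; turn detection, VAD, and the audio transport would sit around this loop in a real deployment.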

I'd love your thoughts and questions! Happy to dive deep into architecture, use cases, or crazy edge cases you've been struggling with.

esafak•6mo ago
Do you watermark the output to enable fraud detection?
bigcat12345678•6mo ago
Good! Is there a way to prompt the TTS output tone, like ElevenLabs? https://elevenlabs.io/docs/best-practices/prompting/eleven-v...

We are building AI companions, so tone prompting would be great.

bigcat12345678•6mo ago
Got to the HN front page and ignored the comments on the post...
httpsterio•6mo ago
and made three accounts to add more praise lol. This should be removed.
sagarkava•6mo ago
Hey bigcat12345678, great question!

Yes, with VideoSDK's Real-Time AI Agents, you can control the TTS output tone, either via prompt engineering (if your TTS provider supports it, like ElevenLabs) or by integrating custom models that support tonal control directly. Our modular pipeline architecture makes it easy to plug in providers like ElevenLabs and pass tone/style prompts dynamically per utterance.

We actually support ElevenLabs out of the box. You can check out the integration details here: https://docs.videosdk.live/ai_agents/plugins/tts/eleven-labs

So if you're building AI companions and want them to sound calm, excited, empathetic, etc., you can absolutely prompt for those tones in real time, or even switch voices or tones mid-conversation based on context or user emotion.
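
For the tone point specifically, here is a tiny sketch of the "per-utterance style prompt" idea in plain Python. The names are placeholders for illustration, not the videosdk or ElevenLabs APIs; a real provider plugin would forward the style hint as whatever style/voice setting that provider supports.

    # Illustrative sketch only; not the videosdk or ElevenLabs API.
    from dataclasses import dataclass

    @dataclass
    class Utterance:
        text: str
        style: str = "neutral"  # e.g. "calm", "excited", "empathetic"

    class StyledTTS:
        def synthesize(self, utt: Utterance) -> bytes:
            # A real plugin would pass utt.style along in the provider request;
            # here we just tag the payload so the idea is visible.
            return f"[{utt.style}] {utt.text}".encode("utf-8")

    if __name__ == "__main__":
        tts = StyledTTS()
        print(tts.synthesize(Utterance("How are you feeling today?", style="empathetic")))
        print(tts.synthesize(Utterance("We did it!", style="excited")))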

Let us know what you're building. Happy to dive deeper into tone control setups or help debug a specific flow!

chopete3•6mo ago
Is this running in production at any site/company?
sagarkava•6mo ago
Yes, VideoSDK Real-Time AI Agents are already running in production with several partners across different domains — from healthcare assistants to customer support agents and AI companions. These deployments are handling real user interactions at scale, across web, mobile, and even telephony.

If you're curious about specific use cases or want to explore how it can fit into your product, happy to share more details or walk through an example.

vivzkestrel•6mo ago
how does it compare to chatterbox TTS? https://github.com/resemble-ai/chatterbox/
sagarkava•6mo ago
Chatterbox is great for local/private TTS with Resemble AI.

Our voice agent SDK is broader: it's full real-time voice infra with STT, LLM, TTS, memory, and RAG built in. You can plug in Resemble, ElevenLabs, etc., and deploy across web, mobile, and telephony with <80ms latency.

monadoid•6mo ago
Why would I use this vs @openai/openai-agents-python (or openai-agents-ts) - the new realtime agents SDKs?

There are so many AI frameworks out there that live & die so quickly that I am generally hard pressed to use any of these unless there is some killer feature I absolutely need.

avsdk•6mo ago
We're not a model ourselves—we provide the infrastructure that enables you to deploy and use any model of your choice, while simplifying communication through AI agents.
sagarkava•6mo ago
Totally fair. The space moves fast, and it's smart to be skeptical. Here's how VideoSDK Real-Time AI Agents stand out from OpenAI agents SDKs and others:

1. Voice infra included
OpenAI agents handle logic and memory, but they don't include real-time audio infra.

VideoSDK gives you:

- <80ms global WebRTC latency

- Built-in turn-taking, VAD, and noise suppression

- Real-time voice across web, mobile, IoT, and telephony

2. Fully modular pipeline
No vendor lock-in. Swap STT, LLM, TTS, and avatars. Change models live per user or use case. Want ElevenLabs for tone and OpenAI for reasoning? Easy.

3. Native RAG + memory
Integrated long-term memory and retrieval help reduce hallucinations and keep conversations grounded (see the sketch after this list).

4. Scale-ready
Deploy globally with one click using Agent Cloud, or self-host with full control. Built for production use.
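
On point 3, here is a toy sketch of the grounding idea: retrieve a few relevant snippets and prepend them to the prompt so the model answers from known facts. The retriever, data, and function names are placeholders for illustration, not the videosdk-live/agents API.

    # Illustrative sketch only; not the videosdk-live/agents API.
    from typing import List

    KNOWLEDGE = [
        "Clinic hours are 9am to 5pm, Monday through Friday.",
        "Appointments can be rescheduled up to 24 hours in advance.",
    ]

    def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
        # Toy lexical-overlap retriever; a real system would use embeddings.
        def overlap(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(docs, key=overlap, reverse=True)[:k]

    def grounded_prompt(user_query: str) -> str:
        # Prepend retrieved context so the LLM stays grounded in known facts.
        context = "\n".join(retrieve(user_query, KNOWLEDGE))
        return f"Answer using only this context:\n{context}\n\nUser: {user_query}"

    if __name__ == "__main__":
        print(grounded_prompt("When can I reschedule my appointment?"))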

If you're building real-time, voice-first agents that need to work across platforms and scale reliably, this is purpose-built for that.

Happy to dive into your use case if you're exploring options.

oldgregg•6mo ago
No demo? No demo video? Nothing?
sagarkava•6mo ago
Hey! Quick video overview: https://www.youtube.com/watch?v=m_oc1GDyhrc

Live demo to try it out: https://aiagent.tryvideosdk.live