frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Launch HN: Cekura (YC F24) – Testing and monitoring for voice and chat AI agents

16•atarus•2h ago
Hey HN - we're Tarush, Sidhant, and Shashij from Cekura (https://www.cekura.ai). We've been running voice agent simulation for 1.5 years, and recently extended the same infrastructure to chat. Teams use Cekura to simulate real user conversations, stress-test prompts and LLM behavior, and catch regressions before they hit production.

The core problem: you can't manually QA an AI agent. When you ship a new prompt, swap a model, or add a tool, how do you know the agent still behaves correctly across the thousands of ways users might interact with it? Most teams resort to manual spot-checking (doesn't scale), waiting for users to complain (too late), or brittle scripted tests.

Our answer is simulation: synthetic users interact with your agent the way real users do, and LLM-based judges evaluate whether it responded correctly - across the full conversational arc, not just single turns. Three things make this actually work:

Scenario generation + real conversation import - Our scenario generation agent bootstraps your test suite from a description of your agent. But real users find paths no generator anticipates, so we also ingest your production conversations and automatically extract test cases from them. Your coverage evolves as your users do.

Mock tool platform - Agents call tools. Running simulations against real APIs is slow and flaky. Our mock tool platform lets you define tool schemas, behavior, and return values so simulations exercise tool selection and decision-making without touching production systems.
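To make the mock-tool idea concrete, here is a minimal sketch in Python. This is my own illustration, not Cekura's actual API: the `MockTool` class, the `lookup_order` tool, and its schema are all hypothetical names, showing only the general pattern of a tool schema plus canned, deterministic behavior.

```python
# Hypothetical sketch of a mock tool (not Cekura's actual API): a tool
# schema plus a canned handler, so a simulated conversation can exercise
# tool selection and argument passing without touching production systems.

class MockTool:
    def __init__(self, name, schema, handler):
        self.name = name          # tool name the agent selects by
        self.schema = schema      # JSON-schema-style argument description
        self.handler = handler    # returns a canned, deterministic result

    def call(self, **kwargs):
        # Check that required arguments were supplied before "executing".
        missing = [k for k in self.schema.get("required", []) if k not in kwargs]
        if missing:
            return {"error": f"missing arguments: {missing}"}
        return self.handler(**kwargs)

# A fake order-lookup tool that always returns the same result.
lookup_order = MockTool(
    name="lookup_order",
    schema={"required": ["order_id"]},
    handler=lambda order_id: {"order_id": order_id, "status": "shipped"},
)

print(lookup_order.call(order_id="A123"))
# -> {'order_id': 'A123', 'status': 'shipped'}
```

Because the handler is deterministic, a simulation run that calls the wrong tool or omits a required argument fails the same way every time.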

Deterministic, structured test cases - LLMs are stochastic. A CI test that passes "most of the time" is useless. Rather than free-form prompts, our evaluators are defined as structured conditional action trees: explicit conditions that trigger specific responses, with support for fixed messages when word-for-word precision matters. This means the synthetic user behaves consistently across runs - same branching logic, same inputs - so a failure is a real regression, not noise.
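A rough way to picture a conditional action tree, under my own assumptions about the structure (the names and matching logic here are illustrative, not Cekura's implementation): explicit conditions mapped to fixed replies, so the synthetic user is a pure function of the agent's message.

```python
# Toy illustration of a conditional action tree for a synthetic user:
# explicit conditions trigger specific fixed responses, so the simulated
# user behaves identically across runs - same branching, same inputs.

ACTION_TREE = [
    # (condition on the agent's last message, synthetic user's fixed reply)
    (lambda msg: "date of birth" in msg.lower(), "My date of birth is 1990-01-01."),
    (lambda msg: "name" in msg.lower(),          "My name is Jane Doe."),
    (lambda msg: "phone" in msg.lower(),         "My phone number is 555-0100."),
]
FALLBACK = "Sorry, can you repeat that?"

def synthetic_user_reply(agent_message: str) -> str:
    # First matching branch wins; same input always yields the same reply,
    # so a failing run reflects a real agent regression, not noise.
    for condition, reply in ACTION_TREE:
        if condition(agent_message):
            return reply
    return FALLBACK

print(synthetic_user_reply("Could I have your phone number?"))
# -> My phone number is 555-0100.
```

The point is the contrast with a free-form "pretend to be a customer" prompt: here every branch is explicit, so two runs against the same agent diverge only if the agent itself changed.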

Cekura also monitors your live agent traffic. The obvious alternative here is a tracing platform like Langfuse or LangSmith - and they're great tools for debugging individual LLM calls. But conversational agents have a different failure mode: the bug isn't in any single turn, it's in how turns relate to each other.

Take a verification flow that requires name, date of birth, and phone number before proceeding - if the agent skips asking for DOB and moves on anyway, every individual turn looks fine in isolation. The failure only becomes visible when you evaluate the full session as a unit. Cekura is built around this from the ground up: where tracing platforms evaluate turn by turn, Cekura evaluates the full session.

Imagine a banking agent where the user fails verification in step 1, but the agent hallucinates and proceeds anyway. A turn-based evaluator sees step 3 (address confirmation) and marks it green - the right question was asked. Cekura's judge sees the full transcript and flags the session as failed because verification never succeeded.
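The banking example above can be sketched as a toy session-level check. This is my own simplified stand-in (keyword matching rather than an LLM judge, and the function and transcript are hypothetical), but it shows why the failure is only visible over the whole transcript.

```python
# Toy session-level evaluator (a simplified stand-in for an LLM judge):
# instead of scoring each turn in isolation, scan the full transcript and
# fail the session if the agent proceeded without verification succeeding.

def evaluate_session(transcript):
    """transcript: list of (speaker, text) tuples for the full conversation."""
    verified = False
    for speaker, text in transcript:
        lowered = text.lower()
        if speaker == "agent" and "verification successful" in lowered:
            verified = True
        # Moving on to account actions before verification succeeded is only
        # visible at the session level - each turn looks fine on its own.
        if speaker == "agent" and "confirm your address" in lowered and not verified:
            return {"passed": False, "reason": "proceeded without verification"}
    return {"passed": True, "reason": None}

transcript = [
    ("agent", "Please verify your identity."),
    ("user",  "My PIN is 0000."),
    ("agent", "That PIN is incorrect."),                # verification never succeeds
    ("agent", "Great - please confirm your address."),  # hallucinated progress
]
print(evaluate_session(transcript))
# -> {'passed': False, 'reason': 'proceeded without verification'}
```

A turn-based evaluator looking only at the last message would mark it green; the session-level check fails it because the precondition was never met.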

Try us out at https://www.cekura.ai - 7-day free trial, no credit card required. Paid plans from $30/month.

We also put together a product video if you'd like to see it in action: https://www.youtube.com/watch?v=n8FFKv1-nMw. The first minute dives into quick onboarding - and if you want to jump straight to the results, skip to 8:40.

Curious what the HN community is doing - how are you testing behavioral regressions in your agents? What failure modes have hurt you most? Happy to dig in below!

Comments

sidhantkabra•1h ago
Was really fun building this - would love feedback from the HN community, and to hear about your current process.
moinism•50m ago
congrats on the launch! do you guys have anything planned to test chat agents directly in the ui? I have an agent, but no exposed api so can't really use your product even though I have a genuine need.
atarus•42m ago
Yes, we do support integrations with different chat agent providers, as well as SMS/WhatsApp agents where you can just drop in the agent's number.

Let us know how your agent can be connected to, and we can advise on the best way to test it.

FailMore•14m ago
Any ideas on how to solve the problem that agents don't have total common sense?

I have found, when using agents to verify agents, that the verifying agent might observe something a human would immediately find off-putting and obviously wrong, yet it raises no flags for the smart-but-dumb agent.

atarus•5m ago
To clarify, are you using the "fast brain, slow brain" pattern? Maybe an example would help.

Broadly speaking, we see people experiment with this architecture often, with a great deal of success. Another approach is an orchestrator architecture with an intent-recognition agent that routes to different sub-agents.

Obviously there are endless cases possible in production, and the best approach is to build your evals using that data.

What European Union's "Managed Decline" story misses [video]

https://www.youtube.com/watch?v=dMZ5a0lgQKA
1•gessha•28s ago•0 comments

Gamers furious as Brit studio Cloud Imperium admits to data breach

https://www.theregister.com/2026/03/03/brit_games_studio_cloud_imperium/
1•xoxxala•28s ago•0 comments

We Audited 2,857 Agent Skills. 12% Were Malicious

https://grith.ai/blog/agent-skills-supply-chain
1•edf13•35s ago•0 comments

Hermes Agent

https://nousresearch.com/hermes-agent/
1•h34t•2m ago•0 comments

Gemini 3.1 Flash Lite Preview

https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-1-flash-lite
1•k9294•2m ago•1 comments

Show HN: Sai – Your always-on co-worker

https://www.simular.ai/sai
1•pentamassiv•3m ago•0 comments

AI Domains Not Resolving

1•rspijker•3m ago•0 comments

The context window is not your database

https://hornet.dev/blog/the-context-window-is-not-your-database
2•handfuloflight•7m ago•0 comments

LexisNexis confirms data breach as hackers leak stolen files

https://www.bleepingcomputer.com/news/security/lexisnexis-confirms-data-breach-as-hackers-leak-st...
2•ghostoftiber•9m ago•0 comments

Show HN: SysNav – An Intelligent Cockpit for DevOps (Local-First)

https://www.sysnav.ai/
1•sys_ravi•9m ago•0 comments

Show HN: Mind-mem – Zero-infra agent memory with 19 MCP tools (BM25+vector+RRF)

https://github.com/star-ga/mind-mem
1•STARGA•9m ago•1 comments

What Military Drones Can Teach Self-Driving Cars

https://spectrum.ieee.org/military-drones-self-driving-cars
1•oldnetguy•9m ago•0 comments

Book Review: Why Are the Prices So Damn High? (2019)

https://srconstantin.wordpress.com/2019/06/28/book-review-why-are-the-prices-so-damn-high/
2•surprisetalk•9m ago•0 comments

From Fargo to Zebra

https://cendyne.dev/posts/2026-02-27-from-fargo-to-zebra.html
1•surprisetalk•9m ago•0 comments

A New Map of Human Experience [video]

https://www.youtube.com/watch?v=r0QY2_Ej32Q
1•surprisetalk•9m ago•0 comments

Mac Themes Garden

https://macthemes.garden/
1•surprisetalk•9m ago•0 comments

Mesa's KosmicKrisp Vulkan-on-Metal Achieves MoltenVK Feature Parity

https://www.phoronix.com/news/KosmicKrisp-Parity
1•PaulHoule•10m ago•0 comments

How Electrical Engineers Fight a War

https://spectrum.ieee.org/repair-ukraine-power-grid
2•oldnetguy•11m ago•0 comments

From $30 to $3: Building My Own AI Chat Platform

https://www.matthew-hre.com/writing/building-bobrchat
2•matthew_hre•11m ago•0 comments

Gemini 3.1 Flash-Lite Preview

https://ai.google.dev/gemini-api/docs/models/gemini-3.1-flash-lite-preview
2•vincelt•11m ago•0 comments

Barry's Borderpoints

https://barrysborderpoints.com/
1•bookofjoe•11m ago•0 comments

Watershed Moment for AI–Human Collaboration in Math Proof Verification

https://spectrum.ieee.org/ai-proof-verification
1•oldnetguy•11m ago•0 comments

Show HN: I built an AI Agent skill translation and refactoring tool

1•nicholas_pw•12m ago•1 comments

How Many Countries Has the US Bombed Since 2001, and How Much Has It Cost?

https://www.aljazeera.com/news/2026/3/3/how-many-countries-has-the-us-bombed-since-2001-and-how-m...
2•karakoram•12m ago•0 comments

Agent Pro – Automate your desktop from your phone (no setup)

1•ypadamat•12m ago•0 comments

The Longing (1999)

https://www.cluetrain.com/book/longing.html
1•herbertl•13m ago•0 comments

Fed Pricing Reveals Market Expectations About the AI Adoption Pace

https://www.apolloacademy.com/fed-pricing-reveals-market-expectations-about-the-ai-adoption-pace/
1•akyuu•13m ago•0 comments

Show HN: IronCurtain: A secure* runtime for AI agent loops

https://github.com/provos/ironcurtain
1•nielsprovos•13m ago•1 comments

Coding with agents feels like a chess simul

https://tobeva.com/articles/chess-simul/
1•pbw•14m ago•0 comments

Every Electric will pay you to use a battery

https://www.greenjuice.wtf/every-electric/
1•DamonHD•14m ago•0 comments