
Launch HN: Voker (YC S24) – Analytics for AI Agents

https://voker.ai
12•ttpost•1h ago
Hey HN, we're Alex and Tyler, co-founders of Voker.ai (https://voker.ai/), an agent analytics platform for AI product teams. Voker gives you full visibility into what users are asking of your agents, and whether your agents are delivering, without having to dig through logs. Our main product is a lightweight SDK that is LLM-stack agnostic and purpose-built for agent products. (https://app.voker.ai/docs)

Agent engineers and AI product teams don't have the right level of visibility into agent performance in production, which results in bad user experiences, churn, and hundreds of hours wasted on spot checks to find and debug issues with agent configurations.

Demo: https://www.tella.tv/video/vid_cmoukcsk1000i07jgb4j65u67/vie...

We recently conducted a survey of YC founders, and 90%+ of respondents said that the only way they know their agents are failing users in production is by hearing complaints from customers. They push a prompt change hoping it fixes the problem and doesn't break something somewhere else, and the cycle repeats.

We saw tons of observability and evals products popping up to address these problems, but we still felt something was missing in the agent monitoring stack. Obs is good for individual trace debugging but is only accessible to engineers. Evals are good for testing known issues, but they don't surface trends that teams don't expect, so engineers are always playing catch-up. Traditional product analytics tools do a good job tracking clicks and pageviews across your product surface, but they weren't built from the ground up for agent products. Knowing what users want out of agents, and whether the agent delivered, requires specific conversational intelligence / unstructured data processing techniques.

We came up with the agent analytics primitives of Intents, Corrections, and Resolutions to describe something pretty much all conversational agents have in common: a user always comes to an agent with an intent, the user might have to correct the agent on the way to getting that intent resolved, and hopefully every intent a user has is eventually resolved by the agent. Voker processes LLM calls by automatically annotating individual conversations and picking out user intents and corrections. Voker then uses LLMs and hierarchical text classification to create dynamic categories that give higher-level insights, so you don't have to read individual conversations to know the main usage patterns across your users.
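To make the primitives concrete, here is a minimal sketch of what an annotated conversation might look like under this model. The class and field names are invented for illustration and are not Voker's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical illustration of the Intent / Correction / Resolution
# primitives described above; names are invented, not Voker's real API.

class Status(Enum):
    OPEN = "open"
    RESOLVED = "resolved"

@dataclass
class Correction:
    turn: int   # conversation turn where the user corrected the agent
    text: str   # what the user said to steer the agent back on track

@dataclass
class Intent:
    text: str        # the user's stated goal, extracted per conversation
    category: str    # dynamic category from hierarchical classification
    corrections: list[Correction] = field(default_factory=list)
    status: Status = Status.OPEN

# One annotated conversation: the user wanted a refund, corrected the
# agent once, and eventually got the intent resolved.
intent = Intent(text="I want a refund for my last order",
                category="billing/refunds")
intent.corrections.append(
    Correction(turn=3, text="No, the latest order, not the first one"))
intent.status = Status.RESOLVED
```

Every conversation reduces to one or more of these records, which is what makes cross-conversation aggregation possible.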

The most common substitute solution we've seen is uploading obs logs to Claude or ChatGPT and asking for summary insights. There are a few problems with this, mainly that LLMs aren't good at math or data science, so you don't get accurate or consistent statistics. It's highly likely that the LLM overfits to some insights and underfits to others, and the LLM isn't programmatically reading and classifying each individual session or interaction. This is why we don't use LLMs for any of our core data engineering (processing events, calculating statistics), so the analytics we produce are consistent, reproducible, and accurate. We have a publicly available, lightweight SDK that wraps LLM calls to OpenAI, Anthropic, and Gemini in Python and TypeScript. Voker handles the data engineering to turn raw data into usable analytics primitives and higher-level insights. Free tier: 2,000 events/mo, requires email signup. Paid plans start at $80/mo with a 30-day free trial.
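The "no LLMs in core data engineering" point can be illustrated with a toy aggregation. Assuming each conversation has already been annotated with a category, a resolved flag, and a correction count (the event shape here is hypothetical), the statistics themselves are plain, deterministic counting:

```python
from collections import defaultdict

# Hypothetical sketch of the deterministic aggregation step: once each
# conversation is annotated, per-category stats are plain counting --
# reproducible on every run, with no LLM in the loop.
events = [
    {"category": "billing/refunds", "resolved": True,  "corrections": 1},
    {"category": "billing/refunds", "resolved": False, "corrections": 3},
    {"category": "account/login",   "resolved": True,  "corrections": 0},
]

stats = defaultdict(lambda: {"total": 0, "resolved": 0, "corrections": 0})
for e in events:
    s = stats[e["category"]]
    s["total"] += 1
    s["resolved"] += e["resolved"]       # True counts as 1
    s["corrections"] += e["corrections"]

for category, s in sorted(stats.items()):
    rate = s["resolved"] / s["total"]
    print(f"{category}: {s['total']} intents, "
          f"{rate:.0%} resolved, {s['corrections']} corrections")
```

Running the same events through this twice gives identical numbers, which is exactly the property an LLM-summarized log dump can't guarantee.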

We'd love to hear how you're currently detecting trends, and if you try Voker, tell us what part of our analysis is valuable, and what still feels missing. Thanks for reading, and we’re looking forward to your thoughts in the comments!

Comments

akslp2080•37m ago
How is it different from Langfuse? Sorry if I'm off track, but Langfuse also provides some detailed tracing of agentic behavior and decisions.
ttpost•25m ago
We get this question a lot! We work hand-in-hand with obs tools like Langfuse. Langfuse is great for debugging technical issues on individual traces like timing conditions that resulted in failed API calls.

Voker focuses on product, business and user outcomes - like what intents did the user bring to your agent that you might not expect. We're built for the whole product team, whereas Langfuse focuses on engineers specifically.

One way to think about it: a PM notices in Voker that a new intent category is coming up frequently and the agent isn't handling it well. The PM can dig into the data with visualizations or our conversation reconstructions. Once they confirm it's a real issue worth addressing, they can hand their investigation to the AI engineer, who can use Voker AND Langfuse to debug and implement a fix/improvement.

Ozzie_osman•26m ago
If the team is here, would love to understand how it compares to something like Amplitude's agent analytics (https://amplitude.com/ai-agents).
ttpost•20m ago
Yeah, this is a confusing one on wording. TLDR: Amplitude is analytics for your web/product data, Voker is analytics for your agent data.

We call Amplitude's feature an "AI Analyst". Essentially, Amplitude is layering an LLM copilot on top of their own product, so you don't have to click the buttons or write reports to get insights.

We're an analytics platform built for tracking your agents. Different products with different problems they're solving.

Not sure if this helps, but essentially Amplitude could use Voker to track how well their AI Analyst agent product is actually working!

Damianf19•12m ago
What's the data model that lets you compare agents that differ a lot in tools/policies? Curious if you normalize on the "what did the user actually accomplish" layer or on raw token/turn metrics, because the two paint completely different pictures of "is this agent working." We struggle with this on the eval side of our own product (email pipeline outcomes, not agents, but same shape).
ggamecrazy•11m ago
> High interaction volume (1k+ chat sessions per month)

I don't mean to be that typical HN commenter but you did lose me a bit there.

I know a lot of people are just getting started with agents but even for a lot of scrappy startups usage is a lot higher than that!

If I may suggest: focus on explaining how you can add value even when usage is super low, all the way to controlling costs when usage gets super high.

I can validate that it's a true problem, one that's solved at large companies but that you have to hand-roll yourself at startups (via Airflow or queues, etc.). But unfortunately it's also one where I'm not sure a lot of stakeholders understand the benefits (yet!). I think the value has to be shown a bit more clearly here, sadly.

Congrats on the launch!

Rendering the Sky, Sunsets, and Planets

https://blog.maximeheckel.com/posts/on-rendering-the-sky-sunsets-and-planets/
212•ibobev•3h ago•16 comments

Bambu Lab is abusing the open source social contract

https://www.jeffgeerling.com/blog/2026/bambu-lab-abusing-open-source-social-contract/
470•rubenbe•2h ago•167 comments

Learning Software Architecture

https://matklad.github.io/2026/05/12/software-architecture.html
397•surprisetalk•7h ago•74 comments

The Future of Obsidian Plugins

https://obsidian.md/blog/future-of-plugins/
41•xz18r•1h ago•14 comments

eBay Rejects GameStop's $56B Takeover as Not Credible

https://www.bloomberg.com/news/articles/2026-05-12/ebay-rejects-gamestop-s-56-billion-takeover-as...
91•voisin•1h ago•69 comments

Screenshots of Old Desktop OSes

http://www.typewritten.org/Media/
534•adunk•11h ago•262 comments

Amazon employees are "tokenmaxxing" due to pressure to use AI tools

https://arstechnica.com/ai/2026/05/amazon-employees-are-tokenmaxxing-due-to-pressure-to-use-ai-to...
37•Bender•28m ago•13 comments

Postmortem: TanStack NPM supply-chain compromise

https://tanstack.com/blog/npm-supply-chain-compromise-postmortem
994•varunsharma07•19h ago•420 comments

Profiling.sampling – Statistical Profiler

https://docs.python.org/3.15/library/profiling.sampling.html#module-profiling.sampling
64•djoldman•2d ago•18 comments

EU to crack down on TikTok, Instagram's 'addictive design' targeting kids

https://www.cnbc.com/2026/05/12/tiktok-instagram-social-media-addictive-eu-crack-down.html
378•thm•5h ago•319 comments

They Live (1988) inspired Adblocker

https://github.com/davmlaw/they_live_adblocker
479•tokenburner•16h ago•151 comments

The Real Story of Troy

https://storica.club/blog/troy-was-real/
4•cemsakarya•2d ago•1 comment

The Surprisingly Long Life of the Vacuum Tube

https://www.construction-physics.com/p/the-surprisingly-long-life-of-the
29•surprisetalk•1d ago•13 comments

If AI writes your code, why use Python?

https://medium.com/@NMitchem/if-ai-writes-your-code-why-use-python-bf8c4ba1a055
754•indigodaddy•20h ago•784 comments

Text Blaze (YC W21) Is Hiring for a No-AI Summer Internship

https://www.ycombinator.com/companies/text-blaze/jobs/P4CCN62-the-blaze-no-ai-summer-internship
1•scottfr•4h ago

Chasing Chicago's movable bridges (2014)

https://aresluna.org/seesaws-for-giants/
55•NaOH•2d ago•8 comments

Analysis points to an unexpected cause of reading difficulties

https://phys.org/news/2026-05-years-struggles-obvious-massive-analysis.html
18•wglb•2d ago•24 comments

UCLA discovers first stroke rehabilitation drug to repair brain damage (2025)

https://stemcell.ucla.edu/news/ucla-discovers-first-stroke-rehabilitation-drug-repair-brain-damage
411•bookofjoe•23h ago•81 comments

Through the looking glass of benchmark hacking

https://poolside.ai/blog/through-the-looking-glass
16•jxmorris12•19h ago•8 comments

UnDUNE II

https://liquidream.itch.io/undune2
109•tosh•4h ago•22 comments

Extremely Low Frequencies

https://computer.rip/2026-05-09-extremely-low-frequencies.html
167•pinewurst•12h ago•14 comments

Coursera and Udemy are now one company

https://blog.coursera.org/coursera-and-udemy-are-now-one-company-creating-the-worlds-most-compreh...
146•Anon84•6h ago•62 comments

Docker images are hundreds of MB; a full game engine compiles to 35MB WASM

https://bogomolov.work/blog/posts/wasm-vs-docker/
50•theanonymousone•3d ago•55 comments

Why senior developers fail to communicate their expertise

https://www.nair.sh/guides-and-opinions/communicating-your-expertise/why-senior-developers-fail-t...
3•nilirl•1h ago•0 comments

Software Internals Book Club

https://eatonphil.com/bookclub.html
166•aragonite•14h ago•27 comments

Claude Platform on AWS

https://claude.com/blog/claude-platform-on-aws
206•matrixhelix•15h ago•86 comments

I let AI build a tool to help me figure out what was waking me up at night

https://martin.sh/i-let-ai-build-a-tool-to-help-me-figure-out-what-was-waking-me-up-at-night/
254•showmypost•19h ago•260 comments

I hate soldering

https://user8.bearblog.dev/rant/
213•James72689•4d ago•170 comments

A lost ancient script reveals how writing as we know it began

https://www.newscientist.com/article/2524042-a-lost-ancient-script-reveals-how-writing-as-we-know...
84•emot•4d ago•54 comments