
The "AI Vulnerability Storm": Building a "Mythos [pdf]

https://labs.cloudsecurityalliance.org/wp-content/uploads/2026/04/mythosreadyv9.pdf
1•_tk_•18s ago•0 comments

Show HN: Send physical postcards from your coding harness

https://api.melonpost.com/SKILL.md
1•thevelop•53s ago•0 comments

Which country can claim steak?

https://www.bbc.com/travel/article/20260402-which-country-can-claim-steak
1•Cider9986•1m ago•0 comments

Sindarov Wins Candidates with Round to Spare

https://www.chess.com/news/view/2026-fide-candidates-tournament-round-13
1•FergusArgyll•2m ago•0 comments

Chinese Electrotech Is the Big Winner in the Iran War

https://paulkrugman.substack.com/p/chinese-electrotech-is-the-big-winner
1•dxs•2m ago•0 comments

Who sent you? – The agent identity crisis

https://highflame.com/blogs/who-sent-you-solving-the-agent-identity-crisis
2•jalbrethsen•4m ago•0 comments

Show HN: Monitor AWS activity and security events using CloudTrail

https://github.com/cloudwatcher-dev/cloudwatcher-aws-cloudformation
1•henriklipp•5m ago•0 comments

Court rules X must give privacy researcher access to personal data

https://nltimes.nl/2026/04/14/court-rules-x-must-give-privacy-researcher-access-personal-data-pri...
1•Kiala•5m ago•0 comments

What Would You See Changed in Haskell?

https://blog.haskell.org/what-would-you-see-changed-in-haskell/
2•birdculture•9m ago•0 comments

Why, After All These Years, MZI-Based Transistorlessness Might Finally Be Here

https://write.as/mnggfj7asl07k
1•aniijbod•10m ago•0 comments

Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills

https://skillmd.discourse.group/
1•0gs•11m ago•1 comments

Poking at AttnRes with NanoGPT

https://axu.sh/post/attention-residuals
2•abhiux•11m ago•0 comments

AI is flattening who we uniquely are

https://twitter.com/heyohelen/status/2044126575399186565
1•trovewithin•12m ago•0 comments

Truth Machine: PW Talks with Kevin Hartnett

https://www.publishersweekly.com/pw/by-topic/authors/interviews/article/100085-truth-machine-pw-t...
1•digital55•13m ago•0 comments

Finding unusual machines in network scans

https://xn--mbius-jua.band/blog/nmapview/
2•gebgebgeb•13m ago•0 comments

Nvidia slaps forehead: I know what quantum is missing – it's AI

https://www.theregister.com/2026/04/14/nvidia_ai_quantum_computing/
1•blackcoffeerain•14m ago•0 comments

AI platform that audits websites daily and tracks competitor SEO

https://arlocmo.site
1•decentrowe•14m ago•0 comments

Fake Linux leader using Slack to con devs into giving up their secrets

https://www.theregister.com/2026/04/13/linux_foundation_social_engineering/
1•blackcoffeerain•16m ago•0 comments

The cost of building a workflow editor on React Flow

https://www.workflowbuilder.io/blog/build-vs-buy-workflow-editor-hidden-cost-react-flow
5•maciek996•16m ago•0 comments

Why Amazon Is Buying Starlink Rival Globalstar in $11B Deal

https://www.wsj.com/tech/amazon-to-acquire-globalstar-in-satellite-cellular-connection-push-448d5a16
3•JumpCrisscross•16m ago•0 comments

I built a free SSH relay for homelab machines behind CGNAT

1•rasengan•19m ago•0 comments

Show HN: Tip for users with Samsung Galaxy S7 with broken display

1•ike____________•20m ago•0 comments

iPhone Ultra – First Look [video]

https://www.youtube.com/watch?v=f7UA1Hmg53Q
2•quadrige•23m ago•0 comments

Coreboot Comes to AMD Ryzen Powered Star Labs StarBook MK VI After 3 Year Wait

https://www.phoronix.com/news/Coreboot-StarBook-MK-VI
1•Bender•25m ago•0 comments

Show HN: Website Is a Video Game

https://run-labs.com/
1•mnewme•25m ago•0 comments

Figma Design to Code, Code to Design: Clearly Explained

https://blog.bytebytego.com/p/figma-design-to-code-code-to-design
3•edbentley•26m ago•0 comments

GitHub's Fake Star Economy

https://awesomeagents.ai/news/github-fake-stars-investigation/
2•ajayvk•26m ago•3 comments

Autonomous Robot Brigade Successfully Retook Russian Positions in Ukraine

https://www.thetimes.com/world/russia-ukraine-war/article/ukraine-robot-army-war-russia-surrender...
3•alephnerd•26m ago•0 comments

Best prompt database for AEs. 100% free

https://crushquota.ai/
1•yous587•26m ago•1 comments

Stop re-entering AI credentials in every app and agent

https://21pins.com/
1•cqwww•26m ago•1 comments

Show HN: Kelet – Root Cause Analysis agent for your LLM apps

https://kelet.ai/
35•almogbaku•3h ago
I've spent the past few years building 50+ AI agents in prod (some reached 1M+ sessions/day), and the hardest part was never building them — it was figuring out why they fail.

AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.

Kelet automates that investigation. Here's how it works:

1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.)
2. Kelet processes those signals and extracts facts about each session
3. It forms hypotheses about what went wrong in each case
4. It clusters similar hypotheses across sessions and investigates them together
5. It surfaces a root cause with a suggested fix you can review and apply
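In code, that flow might look roughly like this. This is a minimal sketch in plain Python to show the pipeline shape; the function names, session shape, and the feedback-based rule are illustrative assumptions, not the Kelet SDK:

```python
from collections import defaultdict

def extract_facts(session):
    """Step 2 (sketch): flatten a raw trace + its signals into facts."""
    return {
        "asked": session["input"],
        "answered": session["output"],
        "signal": session.get("feedback", "none"),
    }

def hypothesize(facts):
    """Step 3 (sketch): a per-session guess at what went wrong.
    Real hypothesis generation would inspect the full trace; this
    toy rule only reacts to explicit negative feedback."""
    if facts["signal"] == "thumbs_down":
        return "answer contradicted retrieved context"
    return None

def cluster(hypotheses):
    """Step 4 (sketch): group matching hypotheses across sessions so
    patterns emerge that single sessions don't show."""
    groups = defaultdict(list)
    for session_id, h in hypotheses:
        if h is not None:
            groups[h].append(session_id)
    return groups

sessions = {
    "s1": {"input": "refund policy?", "output": "...", "feedback": "thumbs_down"},
    "s2": {"input": "shipping time?", "output": "...", "feedback": "thumbs_down"},
    "s3": {"input": "store hours?", "output": "...", "feedback": "thumbs_up"},
}

hypotheses = [(sid, hypothesize(extract_facts(s))) for sid, s in sessions.items()]
groups = cluster(hypotheses)
# Two sessions land in one hypothesis cluster; the third produced none.
```

Two individually random-looking failures become one investigable cluster, which is the point of step 4.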

The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.

The fastest way to integrate is through the Kelet Skill for coding agents — it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.

It’s currently free during beta. No credit card required. Docs: https://kelet.ai/docs/

I'd love feedback on the approach, especially from anyone running agents in prod. Does automating the manual error analysis sound right?

Comments

yanovskishai•3h ago
I imagine it's hard to create a very generic tool for this use case. What frameworks/libs are supported, and what does this tool assume about my implementation?
BlueHotDog2•2h ago
Nice, what a crazy space. How is this different from other telemetry/analysis platforms such as LangChain/Braintrust etc.?
almogbaku•48m ago
Hi @BlueHotDog2, OP here

LangSmith/Langfuse/Braintrust collect traces, and then YOU need to look at them and analyze them (error analysis/RCA).

Kelet does that for you :)

Does that make sense? If not, please tell me, I'm still trying to figure out how to explain it, lol.

halflife•2h ago
Kelet as in קלט as in input?
almogbaku•53m ago
Hi @halflife, OP here

Yep, good catch! Kelet as in input/prompt in Hebrew :)

hadifrt20•2h ago
In the quickstart, the suggested fixes are called "Prompt Patches". Does that mean Kelet only surfaces root causes that are fixable in the prompt? What happens when the real bug is in tool selection or retrieval ranking, for example?
almogbaku•54m ago
Hi @hadifrt20, OP here

From what we discovered analyzing ~33K sessions, most of the time when the agent selects the wrong tool, it's because the tool's description (i.e., its prompt) wasn't good enough, or a nuance was missing.

That goes exactly under Kelet's scope :)

dwb•2h ago
> The key insight

I'm so tired

hmokiguess•2h ago
Hahahahahahahahahhaa ngl, your comment killed me, some LLM tells are so funny
whythismatters•1h ago
Sadly, they forgot to mention why this matters
almogbaku•1h ago
hey @dwb, OP here

Yes, I definitely used an LLM to assist in writing it. Yeah, I should have stripped that out better.

Still, it's f*ing painful to do error analysis and go through thousands of traces. Hope you can live with my human mistakes.

system16•53m ago
Also the obligatory “It’s not A. It’s B.”
trannnnun•2h ago
jkfrntgijbntbuijhb8ybu
RoiTabach•2h ago
This looks amazing! Do you have a LiteLLM integration?
almogbaku•47m ago
Hi @RoiTabach, OP here

Yep. We can integrate with every solution that supports OpenTelemetry :) so it's pretty native, just use the integration skill

npx skills add kelet-ai/skills

peter_parker•2h ago
> They just quietly give wrong answers.

It's not only about wrong answers. They sometimes just get stuck in a loop.
jldugger•1h ago
Every six months or so, someone at work does a hackathon project to automate outage analysis work SRE would likely perform. And every one of them I've seen has been underwhelming and wrong.

There's like three reasons for this disconnect.

1. The agents aren't expert at your proprietary code. They can read logs and traces and make educated guesses, but there's no world model of your code in there.

2. The people building these apps are unqualified to review the output. I used to mock narcissists evaluating ChatGPT quality by asking it for their own biography, but they're at least using a domain they are an expert in. Your average MLE has no profound truths about kubernetes or the app. At best, they're using some toy "known broken" app to demonstrate under what are basically ideal conditions, but part of the holdout set should be new outages in your app.

3. SREs themselves are not so great at causal analysis. Many junior SREs take the "it worked last time" approach, but this embeds a presumption that whatever went wrong "last time" hasn't been fixed in code. Your typical senior SRE takes a "what changed?" approach, which is depressingly effective (as it indicates most outages are caused by coworkers). At the highest echelons, I've seen research papers examining meta-stability and Granger causality networks, but I'm pretty sure nobody in SRE or these RCA agents can explain what they mean.

> The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.

My own insight is mostly bayesian. Typical applications have redundancy of some kind, and you can extract useful signals by separating "good" from "bad". A simple bayesian score of (100+bad)/(100+good) does a relatively good job of removing the "oh that error log always happens" signals. There's also likely a path using clickhouse level data and bayesian causal networks, but the problem is traditional bayesian networks are hand crafted by humans.

So yea, you can ask an LLM for 100 guesses and do some kind of k-means clustering on them, but you can probably do a better job doing dimensional analysis first and passing that on to the agent.

almogbaku•57m ago
Hi @jldugger

Great points, but I think there's some domain confusion here. You're describing infra/code RCA. Kelet does AI-agent quality RCA: the agent returns a 200 OK, but gives the wrong answer.

The signal space is different. We're working with structured LLM traces + explicit quality signals (thumbs down, edits, eval scores), not distributed system logs. Much more tractable.

Your Bayesian point actually resonates — separating good from bad sessions and looking for structural differences is close to what we do. But the hypotheses aren't "100 LLM guesses + k-means." Each one is grounded in actual session data: what the user asked, what the agent did, what came back, and what the signal was.

Curious about the dimensional analysis point — are you thinking about reducing the feature space before hypothesis generation?