

Show HN: How to fix AI Agents at the component level

https://ubiai.tools/building-observable-and-reliable-ai-agents-using-langgraph-langsmith-and-ubiai/
1•Mesterniz•1mo ago

Comments

Mesterniz•1mo ago
I wanted to share some hard-learned lessons about deploying multi-component AI agents to production. If you've ever had an agent fail mysteriously in production while working perfectly in dev, this might help.

The Core Problem

Most agent failures are silent, and most occur in components that showed zero issues during testing. Why? Because we treat agents as black boxes: a query goes in, a response comes out, and we have no idea what happened in between.

The Solution: Component-Level Instrumentation

I built a fully observable agent using LangGraph + LangSmith that tracks:

Component execution flow (router → retriever → reasoner → generator)

Component-specific latency (which component is the bottleneck?)

Intermediate states (what was retrieved, what reasoning strategy was chosen)

Failure attribution (which specific component caused the bad output?)
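A minimal sketch of this kind of per-component instrumentation, in plain Python with no LangGraph/LangSmith dependency. The four component names come from the post; the wrapper, state shapes, and toy outputs are illustrative stand-ins, not the actual implementation:

```python
import time

def instrument(name, fn, trace):
    """Wrap a component so its latency and intermediate output are recorded."""
    def wrapped(state):
        start = time.perf_counter()
        out = fn(state)
        trace.append({
            "component": name,
            "latency_ms": (time.perf_counter() - start) * 1000,
            "output": out,  # intermediate state, e.g. what was retrieved
        })
        return out
    return wrapped

# Toy components standing in for router → retriever → reasoner → generator.
trace = []
router    = instrument("router",    lambda s: {**s, "route": "faq"}, trace)
retriever = instrument("retriever", lambda s: {**s, "docs": ["kb-17"]}, trace)
reasoner  = instrument("reasoner",  lambda s: {**s, "plan": "cite kb-17"}, trace)
generator = instrument("generator", lambda s: {**s, "answer": "draft"}, trace)

state = {"query": "How do I reset my password?"}
for step in (router, retriever, reasoner, generator):
    state = step(state)

# The trace now shows execution order, per-component latency, and the
# intermediate states needed to attribute a bad answer to one component.
assert [t["component"] for t in trace] == ["router", "retriever", "reasoner", "generator"]
```

The same idea is what a tracing backend gives you for free; the point is that each hop in the graph emits its own span rather than the agent emitting one opaque record per query.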

Key Architecture Insights

The agent has 4 specialized components:

Router: Classifies intent and determines workflow

Retriever: Fetches relevant context from knowledge base

Reasoner: Plans response strategy

Generator: Produces final output

Each component can fail independently, and each requires a different fix. A wrong answer could stem from a routing error, a retrieval failure, or a generation hallucination - aggregate metrics won't tell you which.

To pinpoint the cause, I implemented automated failure classification into 6 primary categories:

Routing failures (wrong workflow)

Retrieval failures (missed relevant docs)

Reasoning failures (wrong strategy)

Generation failures (poor output despite good inputs)

Latency failures (exceeds SLA)

Degradation failures (quality decreases over time)

The system automatically attributes failures to specific components based on observability data.
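A rough sketch of what that attribution step can look like over a single trace. The six categories come from the post; the checks, thresholds, field names, and the toy trace below are invented for illustration (and degradation failures, which need trend data across many traces, are deliberately out of scope here):

```python
def classify_failure(trace, sla_ms=2000, expected_doc="kb-17"):
    """Attribute one bad answer to a component, checking upstream first.

    Degradation failures require comparing quality across many traces
    over time, so this single-trace heuristic does not cover them.
    """
    by = {t["component"]: t for t in trace}
    if sum(t["latency_ms"] for t in trace) > sla_ms:
        return "latency"
    if by["router"]["route"] not in {"faq", "escalate"}:
        return "routing"
    if expected_doc not in by["retriever"]["docs"]:
        return "retrieval"
    if not by["reasoner"]["plan"]:
        return "reasoning"
    return "generation"  # all inputs looked fine; blame the last stage

# A toy trace where the retriever missed the relevant document.
trace = [
    {"component": "router",    "latency_ms": 12,  "route": "faq"},
    {"component": "retriever", "latency_ms": 180, "docs": ["kb-03"]},
    {"component": "reasoner",  "latency_ms": 95,  "plan": "summarize"},
    {"component": "generator", "latency_ms": 640},
]
print(classify_failure(trace))  # → "retrieval"
```

The upstream-first ordering matters: a generation failure is only declared once routing, retrieval, and reasoning have been ruled out, which is what makes the blame assignment actionable.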

Component Fine-tuning Matters

Here's what made a difference: fine-tune individual components, not the whole system.

When my baseline showed the generator had a 40% failure rate, I:

Collected examples where it failed

Created training data showing correct outputs

Fine-tuned ONLY the generator

Swapped it into the agent graph

Results: Faster iteration (minutes vs hours), better debuggability (know exactly what changed), more maintainable (evolve components independently).
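The swap-in step is what keeps iteration fast: because each component is just a node in the graph, replacing one leaves the others untouched. A hypothetical plain-Python stand-in for that step (`finetuned_generator` here merely represents the retrained model; the real version would swap a node in the LangGraph graph):

```python
def baseline_generator(state):
    return {**state, "answer": "draft: " + state["plan"]}

def finetuned_generator(state):
    # Stand-in for the generator after fine-tuning on collected failures.
    return {**state, "answer": "grounded: " + state["plan"]}

# The agent graph as an ordered mapping of component name → callable.
graph = {
    "router":    lambda s: {**s, "route": "faq"},
    "retriever": lambda s: {**s, "docs": ["kb-17"]},
    "reasoner":  lambda s: {**s, "plan": "cite kb-17"},
    "generator": baseline_generator,
}

def run(graph, query):
    state = {"query": query}
    for step in graph.values():
        state = step(state)
    return state["answer"]

# Swap ONLY the generator; router/retriever/reasoner are untouched.
graph["generator"] = finetuned_generator
print(run(graph, "reset password"))  # → "grounded: cite kb-17"
```

Because the interface between nodes is just the shared state, the swapped component can be A/B tested against the baseline on the same traces before it ships.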

For anyone interested in the tech stack, here is some info:

LangGraph: Agent orchestration with explicit state transitions

LangSmith: Distributed tracing and observability

UBIAI: Component-level fine-tuning (prompt optimization → weight training)

ChromaDB: Vector store for retrieval

Key Takeaway

You can't improve what you can't measure, and you can't measure what you don't instrument.

The full implementation shows how to build this for customer support agents, but the principles apply to any multi-component architecture.

Happy to answer questions about the implementation. The blog post with the full code is linked above.