
Principles for production AI agents

https://www.app.build/blog/six-principles-production-ai-agents
128•carlotasoto•6mo ago

Comments

carlotasoto•6mo ago
Practical lessons from building production agentic systems
roadside_picnic•6mo ago
Did we just give up on evaluations these days?

Over and over again, my experience building production AI tools/systems has been that evaluations are vital for improving performance.

I've also seen a lot of people proposing some variation of "LLM as critic" as a solution to this, but I've never seen empirical evidence that this works. Furthermore, I've worked with a pretty well-respected researcher in this space, and in our internal experiments we found that LLMs were not good critics.

Results are always changing, so I'm very open to the possibility that someone has successfully figured out how to use "LLM as critic", but without the foundation of some basic evals to compare against, I remain skeptical.
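The "basic evals" baseline being argued for here can be very small: a fixed set of prompts with programmatic checks, so prompt tweaks are compared on pass rate rather than vibes. A minimal sketch — the task, the checks, and the `run_agent` stub are all hypothetical stand-ins, not anything from the article:

```python
# Minimal eval harness: fixed cases with programmatic pass/fail checks.
# run_agent is a hypothetical placeholder for the system under test.

def run_agent(prompt: str) -> str:
    # Placeholder: call your model/agent here.
    return "SELECT count(*) FROM orders WHERE status = 'open'"

CASES = [
    # (prompt, check) pairs; checks are plain predicates on the output.
    ("How many open orders are there?",
     lambda out: "count" in out.lower() and "orders" in out),
    ("List customer emails.",
     lambda out: "customers" in out or "email" in out),
]

def run_suite() -> float:
    passed = sum(1 for prompt, check in CASES if check(run_agent(prompt)))
    return passed / len(CASES)

if __name__ == "__main__":
    print(f"pass rate: {run_suite():.0%}")
```

Even something this crude gives a repeatable number to compare two prompts against, which a one-off "LLM as critic" pass does not.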

Aurornis•6mo ago
Evals are a core part of any up-to-date LLM team's practice. If a team is just winging it without robust eval practices, they're not to be trusted.

> Furthermore, I've worked with a pretty well-respected researcher in this space and in our internal experiments we found that LLMs were not good critics

This is an idea that seems so obvious in retrospect, after using LLMs and getting so many flattering responses telling us we're right and complimenting our inputs.

For what it’s worth, I’ve heard from some people who said they were getting better results by intentionally using different LLM models for the eval portion. Feels like having a model in the same family evaluate its own output triggers too many false positives.

Uehreka•6mo ago
I once asked Claude Code (Opus 4) to review a codebase I’d built, and threw in at the end of my prompt something like “No need to be nice about it.”

Now granted, you could say it was “flattering that instruction”, but it sure didn’t flatter me. It absolutely eviscerated my code, calling out numerous security issues (which were real), all manner of code smells and bad architectural decisions, and ended by saying that the codebase appeared to have been thrown together in a rush with no mind toward future maintenance (which was… half true… maybe more true than I’d like to admit).

All this to say that it is far from obvious that LLMs are intrinsically bad critics.

Herring•6mo ago
I have an idea. What if we used a third LLM to evaluate how good the secondary LLM is at critiquing the primary LLM.
colonCapitalDee•6mo ago
The problem isn't that LLMs can't be critical, it's that LLMs don't have taste. It's easy to get an LLM to give praise, and it's easy to get an LLM to give criticism, but getting an LLM to praise good things and criticize bad things is currently impossible for non-trivial inputs. That's not to say that prompting your LLM to generate criticism is useless, it's just that any LLM prompted to generate criticism is going to criticize things that are actually fine, just like how an LLM prompted to generate praise (which is effectively the default behavior) is going to praise things that are deeply not fine.
bubblyworld•6mo ago
Absolutely matches my experience - it can still be super helpful, but AIs have an extreme version of anchoring bias.
jauhar_•6mo ago
Another issue is that the behaviour of the LLMs is not very consistent.
sudhirb•6mo ago
For coding agents, evaluations are tricky - thorough evaluation tasks tend to be slow and/or expensive and/or display a high degree of variance over N attempts. You could run a whole benchmark like SWE Bench or Terminal Bench against a coding agent on every change but it quickly becomes infeasible.
roadside_picnic•6mo ago
I used to own the eval suite for a coding agent; it's certainly doable, even when it requires SQL + tables etc. We even had support for a wide range of data options, ranging from canned CSV data to plugging into prod to simulate the user experience, all easily configurable at eval run time. It also supported agentic flows where the results from one eval could be chained to the next (with a known correct answer optionally supplied to check the framework end to end in the case of node failure).

Interestingly enough, we started with hundreds of evals, but after that experience my advice has become: fewer evals, tied more closely to specific features and product ambitions.

By that I mean: some evals should serve as a warning ("uh oh, that eval failed, don't push to prod"), others as a milestone ("woohoo! we got it to work!"), and all should be informed by the product roadmap. You should basically be able to understand where the product is going just by looking over the eval suite.

And, if you don't have evals, you really don't know if you're moving the needle at all. There were multiple situations where a tweak to a prompt passed an initial vibe check, but when run against the full eval suite, clearly performed worse.

The other piece of advice would be: evals don't have to be sophisticated, just repeatable and agnostic to who's running them. Heck, even "vibe checks" can be good evals, if they're written down and passing requires some consensus among multiple people.
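The warning-vs-milestone split described above can be encoded directly in the suite, so that only some failures block a release. A hypothetical sketch (the eval names, roles, and checks are invented for illustration):

```python
# Each eval carries a role: "gate" evals block a deploy on failure,
# "milestone" evals only track progress toward a roadmap item.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Eval:
    name: str
    role: str                    # "gate" or "milestone"
    check: Callable[[], bool]    # returns True when the eval passes

SUITE = [
    Eval("sql-basic-count", "gate", lambda: True),        # must keep passing
    Eval("multi-step-join", "milestone", lambda: False),  # aspirational
]

def can_deploy() -> bool:
    # Only gate evals block a release; milestone failures are informational.
    return all(e.check() for e in SUITE if e.role == "gate")
```

Reading `SUITE` top to bottom then doubles as the product roadmap the comment describes: gates are what the product already promises, milestones are where it is headed.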

criemen•6mo ago
Running evals isn't the problem; the problem is acquiring or building a high-quality, non-contaminated dataset.

https://arxiv.org/abs/2506.12286 makes a very compelling case that SWE-bench (and by extension, anything based on public source code) is most likely overestimating your agent's actual capabilities.

simonw•6mo ago
This is the best guide I've seen to the LLM-as-judge pattern: https://hamel.dev/blog/posts/llm-judge/index.html
glial•6mo ago
This is fantastic, thank you for sharing.
edmundsauto•6mo ago
Hamel has a ton of great and free content on YouTube. He and Shreya Shankar are a breath of fresh air.
abhgh•6mo ago
Evals somehow seem to be very underrated, which is concerning in a world where we are moving (or trying to move) towards systems with more autonomy.

Your skepticism of "llm-as-a-judge" setups is spot on. If your LLM can make mistakes/hallucinate, then of course your judge LLM can too. In practice, you need to validate your judges and possibly adapt them to your task based on sample annotated data. You might adapt them by trial and error, by prompt optimization, e.g., using DSPy [1], or by learning a small correction model on top of their outputs, e.g., LLM-Rubric [2] or Prediction-Powered Inference [3].

In the end, using the LLM as a judge confers just these benefits:

1. It is easy to express complex evaluation criteria. This does not guarantee correctness.

2. Seen as a model, it is easy to "train", i.e., you get all the benefits of in-context learning (prompt-based, few-shot).

But you still need to evaluate and adapt them. I have notes from a NeurIPS workshop from last year [4]. Btw, love your username!

[1] https://dspy.ai/

[2] https://aclanthology.org/2024.acl-long.745/

[3] https://www.youtube.com/watch?v=TlFpVpFx7JY

[4] https://blog.quipu-strands.com/eval-llms
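The validate-your-judge step above reduces to measuring agreement between judge verdicts and human annotations on a labeled sample. A minimal sketch — the labels below are made up for illustration:

```python
# Compare LLM-judge verdicts against human annotations to decide
# whether the judge is trustworthy for this task. Data is illustrative.
human = [1, 1, 0, 0, 1, 0]   # ground-truth pass/fail from annotators
judge = [1, 1, 1, 0, 1, 0]   # verdicts from the LLM judge

# Overall agreement rate.
agreement = sum(h == j for h, j in zip(human, judge)) / len(human)

# A judge that praises everything still agrees on every "pass" case,
# so separately check agreement on the cases humans marked as failures.
fail_agreement = (
    sum(h == j for h, j in zip(human, judge) if h == 0)
    / sum(1 for h in human if h == 0)
)
```

If `fail_agreement` is low, the judge has the flattery problem discussed upthread, and a correction model or prompt optimization pass (as in [1]-[3]) is warranted before trusting its scores.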

prats226•6mo ago
I see that in tool calling, we usually specify just the inputs to functions and not what typed output is expected from the function.

In DSL-style agents, giving LLMs info about what structured inputs are needed to call functions, as well as what outputs are expected, would probably result in better planning?
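Concretely, this would mean attaching an output schema next to the usual input schema in the tool definition. A hypothetical sketch in the JSON-Schema style most tool-calling APIs use — note that a `returns` field is an invented extension here, not part of any standard tool-calling API:

```python
# A tool definition that declares its return shape as well as its
# parameters, so a planner knows what each call will yield.
# The "returns" key is hypothetical; only "parameters" is standard.
get_weather = {
    "name": "get_weather",
    "description": "Current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "returns": {  # hypothetical typed output, for the planner's benefit
        "type": "object",
        "properties": {
            "temp_c": {"type": "number"},
            "conditions": {"type": "string"},
        },
    },
}
```

With the return shape declared, a planner can check that a downstream step consuming `temp_c` as a number is actually fed one, before any call is made.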

SrslyJosh•6mo ago
"Don't."
lacoolj•6mo ago
Always hard to take an article seriously when it has typos, some of which are repeated ("promt" in the graphic on Principle 2)
henriquegodoy•6mo ago
I've been tinkering with agentic systems for a while now, and this post nails some key pain points that hit close to home. The emphasis on splitting context and designing tight feedback loops feels spot on—I've seen agents go off the rails without them, hallucinating solutions because the prompt was too bloated or the validation was half-baked. It's like building a machine where every part needs to click just right, or else you're debugging forever.

What really resonates is the bit about frustrating behaviors signaling deeper system issues, not just model quirks. In my own experiments, I've had agents stubbornly ignore tools because I forgot to expose the right APIs, and it made me rethink how we treat these as "intelligent" when they're really just following our flawed setups. It pushes us toward more robust orchestration, where humans handle the high-level intentions and AI fills in the execution gaps seamlessly.

This ties into broader ideas on how AI interfaces will evolve as models get smarter. I extrapolate more of this thinking and dive deeper into human–AI interfaces on my blog if anyone’s interested in checking it out: https://henriquegodoy.com/blog/stream-of-consciousness