frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
503•klaussilveira•8h ago•139 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
842•xnx•14h ago•506 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
57•matheusalmeida•1d ago•11 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
166•dmpetrov•9h ago•76 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
281•vecti•10h ago•127 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
60•quibono•4d ago•10 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
340•aktau•15h ago•164 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
226•eljojo•11h ago•141 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
422•todsacerdoti•16h ago•221 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
364•lstoll•15h ago•251 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
12•denuoweb•1d ago•0 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
79•SerCe•4h ago•60 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
59•phreda4•8h ago•9 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
16•gmays•3h ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
211•i5heu•11h ago•158 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
123•vmatsiiako•13h ago•51 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
33•gfortaine•6h ago•9 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
160•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
258•surprisetalk•3d ago•34 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1020•cdrnsf•18h ago•425 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
52•rescrv•16h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•13 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
95•ray__•5h ago•46 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
36•betamark•15h ago•29 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
10•denysonique•5h ago•1 comments

Thoughts on Evals

https://www.raindrop.ai/blog/thoughts-on-evals/
30•Nischalj10•2mo ago

Comments

CharlieDigital•2mo ago
Both of these are kind of silly and vendors trying to sell you tooling you probably don't need.

In a gold rush, each is trying to sell you a different kind of shovel, claiming theirs is the best, when you really should go find a geologist and figure out where the vein is.

anonymoushn•2mo ago
The framing in this post is really weird. Automated evals can be much more informative than unit tests because the results can be much more fine grained. A/B testing in production is not suitable for determining whether all of one's internal experiments are successful or not.

I don't doubt that Raindrop's product is worthwhile to model vendors, but the post seems like its audience is C-suite folks who have no clue how anything works. Do their most important customers even have any of these?

CharlieDigital•2mo ago
I think in most cases, outside of pure AI providers or thin AI wrappers, almost every team will realize more gains from focusing on their user domains and solving business problems versus fine-tuning their prompts to eke out a 5% improvement here and there.
basket_horse•2mo ago
I don’t think you can use this as a blanket statement. For many use cases the last 5-10% is the difference between demoware and production.
CharlieDigital•2mo ago
If that were true, just switching to TOON would make your startup take off.

That is obviously not true because a 5% gain in LLM performance isn't going to make up for a bad product.

gk1•2mo ago
Founder of a/b testing company accuses founder of evals company of misrepresenting how a/b tests are used in practice, then concludes by misrepresenting how evals are used in practice: "Or you can write 10,000,000 evals."

Could've easily been framed as "you need both evals and a/b testing," but instead they chose this route which comes across as defensive, disingenuous, and desperate.

BTW, if a competitor ever writes a whole post to refute something you barely alluded to without even mentioning their name... congratulations, you've won.

basket_horse•2mo ago
Agree. This whole post comes across as sales rather than acknowledging that both are useful for different things.
eitland•2mo ago
This scared me until I realized it is about raindrop.ai, not raindrop.io.

(Raindrop.io is a bookmark service that AFAIK has "take money from people and store their bookmarks" as its complete business model.)

koakuma-chan•2mo ago
> Intentionally or not, the word "eval" has become increasingly vague. I've seen at least 6 distinct definitions of evals

This. I am so tired of people saying evals without defining what they mean. And now even management is asking me for evals and why we are not fine tuning.

esafak•2mo ago
No, thanks. Just use evals with error bars. If you can't get error bars, use an A/B test to detect spuriousness and evals.
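
For concreteness, "evals with error bars" can be as simple as reporting a pass rate together with a confidence interval. A small Python sketch, assuming boolean per-case results and a normal-approximation interval (the numbers are made up for illustration):

    import math

    def pass_rate_with_ci(results: list[bool], z: float = 1.96):
        """Pass rate with an approximate 95% confidence interval."""
        n = len(results)
        p = sum(results) / n
        se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
        return p, max(0.0, p - z * se), min(1.0, p + z * se)

    # e.g. 78 of 100 cases passed
    rate, low, high = pass_rate_with_ci([True] * 78 + [False] * 22)
    print(f"pass rate {rate:.2f} (95% CI {low:.2f}-{high:.2f})")

If two prompt variants have overlapping intervals, the measured difference may just be noise, which is roughly what "detect spuriousness" is getting at.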
gregsadetsky•2mo ago
I'm new/uninformed in this world, but I have an idea for an eval that I think has not been tried yet.

Can anyone direct me towards how to ... make one? At the most fundamental level, is it about having test questions with known, golden (verified, valid) answers, and asking different LLM models to find the answer, and comparing scores (how many were found to be correct)?

What are "obvious" things that are important to get right - temperature set to 0? At least ~10 or 20 attempts at the same problem for each llm? What are non-obvious gotchas?

Finally, are there any known/commonly used frameworks to do this, or would any tooling that can call different LLMs be enough?

Thanks!
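
At its most fundamental, an eval harness of the kind described above is a loop over golden prompt/answer pairs: call each model, grade the response, and tally a score. A minimal Python sketch of that shape, where call_model is a hypothetical placeholder for whatever provider client you use, and the cases and exact-match grader are illustrative only:

    CASES = [
        {"prompt": "What is 2 + 2?", "expected": "4"},
        {"prompt": "What is the capital of France?", "expected": "Paris"},
    ]

    def call_model(model: str, prompt: str) -> str:
        raise NotImplementedError("plug in your provider's API client here")

    def grade(answer: str, expected: str) -> bool:
        # Simplest possible grader: normalized exact match. Real evals often
        # use fuzzier checks or an LLM-as-judge for open-ended answers.
        return answer.strip().lower() == expected.strip().lower()

    def run_eval(models: list[str], attempts: int = 10) -> dict[str, float]:
        scores = {}
        for model in models:
            correct = 0
            for case in CASES:
                for _ in range(attempts):  # repeat to average out sampling noise
                    if grade(call_model(model, case["prompt"]), case["expected"]):
                        correct += 1
            scores[model] = correct / (len(CASES) * attempts)
        return scores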

koakuma-chan•2mo ago
> Can anyone direct me towards how to ... make one?

https://hamel.dev/blog/posts/evals/

> What are "obvious" things that are important to get right - temperature set to 0? At least ~10 or 20 attempts at the same problem for each llm?

LLMs are actually pretty deterministic, so there is no need to do more than one attempt with the exact same data.

> Finally, any known/commonly used frameworks to do this, or any tooling that can call different LLMs would be enough?

https://github.com/vercel/ai

https://github.com/mattpocock/evalite

gregsadetsky•2mo ago
I'm very grateful! Thanks a lot
ncgl•2mo ago
"LLMs are actually pretty deterministic, so there is no need to do more than one attempt with the exact same data."

Is this true? I remember there being a randomization factor in weighting tokens to make the output more... something, don't recall what.

Obviously I'm not an AI dev.

koakuma-chan•2mo ago
In my experience, the response may not be exactly the same, but the difference is negligible.
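
The "randomization factor" being recalled is temperature-based sampling over token probabilities. A toy sketch (not any particular provider's implementation) of how temperature trades determinism for variety:

    import math
    import random

    def sample_token(logits: dict[str, float], temperature: float) -> str:
        """Pick the next token; temperature ~0 is effectively greedy/deterministic."""
        if temperature <= 1e-6:
            return max(logits, key=logits.get)
        weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
        r = random.uniform(0, sum(weights.values()))
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # guard against floating-point leftovers

    logits = {"yes": 2.1, "no": 1.9, "maybe": 0.3}
    print(sample_token(logits, temperature=0.0))  # always "yes"
    print(sample_token(logits, temperature=1.0))  # usually "yes", sometimes not

Even at temperature 0, hosted models can differ slightly between runs (batching and floating-point effects are commonly cited), which is consistent with the "not exactly the same, but negligible" observation above.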
moltar•2mo ago
Take a look at promptfoo