frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
418•klaussilveira•5h ago•94 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
770•xnx•11h ago•465 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
137•isitcontent•5h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
131•dmpetrov•6h ago•54 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
37•quibono•4d ago•2 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
241•vecti•8h ago•116 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
63•jnord•3d ago•4 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
309•aktau•12h ago•153 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
309•ostacke•11h ago•84 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
168•eljojo•8h ago•124 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
38•SerCe•1h ago•34 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
391•todsacerdoti•13h ago•217 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
314•lstoll•12h ago•230 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
107•vmatsiiako•10h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
181•i5heu•8h ago•128 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
233•surprisetalk•3d ago•30 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
14•gfortaine•3h ago•0 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
971•cdrnsf•15h ago•414 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
141•limoce•3d ago•79 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
40•rescrv•13h ago•17 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
8•kmm•4d ago•0 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
42•ray__•2h ago•11 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
34•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
76•antves•1d ago•57 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
18•MarlonPro•3d ago•4 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
38•nwparker•1d ago•9 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
102•coloneltcb•2d ago•69 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
25•betamark•12h ago•23 comments

Planetary Roller Screws

https://www.humanityslastmachine.com/#planetary-roller-screws
36•everlier•3d ago•8 comments

Caveat Prompter

https://surfingcomplexity.blog/2025/10/12/caveat-promptor/
12•azhenley•3mo ago

Comments

fluxusars•3mo ago
This is no different from reviewing code from actual humans: someone could have written great-looking code with excellent test coverage and still have missed a crucial edge case or an obvious requirement. In the case of humans, there are obvious limits and approaches to scaling up. With LLMs, who knows where they will go in the next couple of years.
greatgib•3mo ago
It is, because a human would have used "thinking" to create that piece of code. There could be errors and mistakes, but at least you know there is human logic behind it, and you only have to check for the kinds of mistakes a human can easily make.

With AI, in its current state at least, there is no global logic behind the thing that was created. It is a set of probabilities that generated somewhat valid code, but there is no global "view" in which it all makes sense.

So when reviewing, you basically have to run through the same mental process in your head that an original human contributor would have, to check that the whole thing makes sense in the first place.

Worse than that, when reviewing such a change, you should assume that the AI probably generated a few invalid versions of the code and randomly iterated until something passed the "linter" definition of valid code.

ofrzeta•3mo ago
Even that screenshot is bogus. When there's no understanding there can be no misunderstanding either. It's misleading to treat the LLM like there is understanding (and for the LLMs themselves to claim they do, although this anthropomorphization is part of their success). It's like asking the LLM "do you know about X?" It just makes no sense.
satisfice•3mo ago
In order to get the full benefit of AI we must apply it irresponsibly.

That’s what it boils down to.

stavros•3mo ago
Which has also always been the same with people.
satisfice•3mo ago
That’s what AI fanboys say every single time I make this point. But the “it’s the same for humans” argument only works if you are referring to little children.

Indeed, my airline pilot brother once told me that a carefully supervised 7-year-old could fly an airliner safely, as long as there was no in-flight emergency.

And indeed, hiring children, who are not accountable for their behavior, does create a supervision problem that can easily exceed the value you get, for many kinds of work.

I can’t trust AI the way I can trust qualified adults.

stavros•3mo ago
Well, you employ different adults than I do, then. Every person I know (including me) can be either thorough or fast, as the post says, and there’s no way to get both.
phinnaeus•3mo ago
At first I thought this was a typo, but actually I fully agree with it. If we use LLMs (in their current state) responsibly, we won’t see much benefit, because the weight of that responsibility is roughly equivalent to the cost of doing the task without any assistance.