
Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•2m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•2m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•3m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•3m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•5m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•6m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•6m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•6m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•7m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•7m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•8m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•8m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•10m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•10m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•16m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•17m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•18m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•20m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•20m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•21m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•21m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
5•samasblack•23m ago•2 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•24m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•25m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•26m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•28m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•28m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•28m ago•1 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•29m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•29m ago•0 comments

Caveat Prompter

https://surfingcomplexity.blog/2025/10/12/caveat-promptor/
12•azhenley•3mo ago

Comments

fluxusars•3mo ago
This is no different from reviewing code from actual humans: someone could have written great-looking code with excellent test coverage and still have missed a crucial edge case or an obvious requirement. In the case of humans, there are obvious limits and known approaches to scaling up. With LLMs, who knows where they will go in the next couple of years.
greatgib•3mo ago
It is, because a human would have used "thinking" to create this piece of code. There could be errors and mistakes, but at least you know there is a human logic behind it, and you just have to check for the things that are easy mistakes for a human to make.

With AI, in its current state at least, there is no global logic behind the whole thing that was created. It is a random set of probabilities that generated somehow-valid code. But there is no global "view" of it that makes sense.

So when reviewing, you basically have to run through, in your head, the same mental process the original human contributor would have done, just to check that the whole thing makes sense in the first place.

Worse than that, when reviewing such a change, you should imagine that the AI probably generated a few invalid versions of the code and randomly iterated until something passed the "linter" definition of valid code.

ofrzeta•3mo ago
Even that screenshot is bogus. When there is no understanding, there can be no misunderstanding either. It's misleading to treat the LLM as if there were understanding (and for the LLMs themselves to claim they have it, although this anthropomorphization is part of their success). It's like asking the LLM "do you know about X?" It just makes no sense.
satisfice•3mo ago
In order to get the full benefit of AI we must apply it irresponsibly.

That’s what it boils down to.

stavros•3mo ago
Which has also always been the same with people.
satisfice•3mo ago
That’s what AI fanboys say every single time I make this point. But the “it’s the same for humans” argument only works if you are referring to little children.

Indeed, my airline pilot brother once told me that a carefully supervised 7 year old could fly an airliner safely, as long as there was no in-flight emergency.

And indeed hiring children, who are not accountable for their behavior, does create a supervision problem that can easily exceed the value you may get, for many kinds of work.

I can’t trust AI the way I can trust qualified adults.

stavros•3mo ago
Well, you employ different adults than I do, then. Every person I know (including me) can be either thorough or fast, as the post says, and there's no way to get both.
phinnaeus•3mo ago
At first I thought this was a typo, but actually I fully agree with this. If we use LLMs (in their current state) responsibly we won’t see much benefit, because the weight of that responsibility is roughly equivalent to the cost of doing the task without any assistance.