frontpage.

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•1m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•1m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•2m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•3m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•4m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•4m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
1•pseudolus•4m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•9m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
1•bkls•9m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•10m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
3•roknovosel•10m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•18m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•19m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•21m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•21m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•21m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
2•pseudolus•22m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•22m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•23m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•23m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•24m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•25m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•25m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•28m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•29m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•29m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•30m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•31m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•31m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•32m ago•0 comments

Caveat Prompter

https://surfingcomplexity.blog/2025/10/12/caveat-promptor/
12•azhenley•3mo ago

Comments

fluxusars•3mo ago
This is no different from reviewing code from actual humans: someone could have written great-looking code with excellent test coverage and still have missed a crucial edge case or an obvious requirement. In the case of humans, there are obvious limits and known approaches to scaling up. With LLMs, who knows where they will go in the next couple of years.
greatgib•3mo ago
It is, because a human would have used "thinking" to create that piece of code. There could be errors and mistakes, but at least you know there is human logic behind it, and you only have to check for the kinds of mistakes a human can easily make.

With AI, at least in its current state, there is no global logic behind the thing that was created. It is a random set of probabilities that generated somewhat valid code, but there is no global "view" in which the whole thing makes sense.

So when reviewing, you basically have to run through, in your head, the same mental process an original human contributor would have gone through, just to check that the whole thing makes sense in the first place.

Worse than that, when reviewing such a change you should assume that the AI probably generated a few invalid versions of the code and iterated randomly until something passed the "linter" definition of valid code.

ofrzeta•3mo ago
Even that screenshot is bogus. Where there is no understanding, there can be no misunderstanding either. It is misleading to treat the LLM as if there were understanding (and for the LLMs themselves to claim they have it, although this anthropomorphization is part of their success). It's like asking the LLM "do you know about X?"; it just makes no sense.
satisfice•3mo ago
In order to get the full benefit of AI we must apply it irresponsibly.

That’s what it boils down to.

stavros•3mo ago
Which has always been the case with people as well.
satisfice•3mo ago
That’s what AI fanboys say every single time I make this point. But the “it’s the same for humans” argument only works if you are referring to little children.

Indeed, my airline pilot brother once told me that a carefully supervised 7-year-old could fly an airliner safely, as long as there was no in-flight emergency.

And indeed, for many kinds of work, hiring children, who are not accountable for their behavior, creates a supervision problem that can easily exceed whatever value you get.

I can’t trust AI the way I can trust qualified adults.

stavros•3mo ago
Well, you employ different adults than I do, then. Every person I know (including me) can be either thorough or fast, as the post says, and there's no way to get both.
phinnaeus•3mo ago
At first I thought this was a typo, but actually I fully agree with this. If we use LLMs (in their current state) responsibly, we won’t see much benefit, because the weight of that responsibility is roughly equivalent to the cost of doing the task without any assistance.