
Neomacs: Rewriting the Emacs display engine in Rust with GPU rendering via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•1m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•5m ago•0 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
1•m00dy•6m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•7m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
1•okaywriting•14m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•17m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•17m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•18m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•19m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•19m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•20m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•20m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•24m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•25m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•26m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•26m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•34m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•34m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•37m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•37m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•37m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
5•pseudolus•37m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•37m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•39m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•39m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•39m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•41m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•41m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•44m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•44m ago•1 comments

Eric S. Raymond: why is there such a huge variance in results from using LLMs?

https://twitter.com/esrtweet/status/2016849708254179501
6•dist-epoch•1w ago

Comments

sara_builds•1w ago
The variance mostly comes down to prompt craft and context management.

People who get consistently good results have usually internalized a few things: (1) being explicit about constraints and output format, (2) providing relevant context without noise, (3) matching the model to the task (reasoning-heavy vs creative vs code), and (4) iterating on the prompt when something fails rather than assuming the model is broken.

I've seen the same person get wildly different results depending on whether they ask "write code to do X" vs "I need a function that takes A, returns B, handles edge case C, and should be optimized for D. Here's the existing code it needs to integrate with: [context]."

The gap between those two approaches can be a 10x difference in usefulness. Most of the "LLMs are useless" crowd and the "LLMs are magic" crowd are just working with very different prompt habits.
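The "takes A, returns B, handles edge case C, optimized for D" style of prompt reads like a function contract. A hypothetical sketch of the kind of output that contract tends to produce (the function name and task here are invented for illustration, not from the thread):

```python
# Vague prompt: "write code to dedupe a list" -- signature, ordering,
# and performance are left for the model to guess.
#
# Explicit prompt: "a function that takes a list of strings (A), returns
# a list with duplicates removed keeping first occurrence (B), handles
# the empty list (C), and runs in O(n) for large inputs (D)."

def dedupe_preserving_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, keeping first occurrence.

    Edge case: an empty list returns []. Uses a seen-set so the
    whole pass is O(n) rather than O(n^2) membership scans.
    """
    seen: set[str] = set()
    out: list[str] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```

Every decision the vague prompt leaves open (ordering, empty input, complexity) is pinned down before the model sees it, which is most of the 10x gap.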

tjr•1w ago
It appears to me that the people who consistently get the best results from LLM coding tools are prompting fairly close to the code. Maybe not quite at the level of writing pseudocode, but close enough that they really still need to understand software development.

Which doesn't quite gel with the notion that you don't need programmers, that you don't need to know how to program, and so on.

I feel pretty confident that, in fact, you don't need to. You probably can get good results without having a clue what you're doing, if you prompt well enough, or prompt long enough, or prompt repeatedly until it works. But I think you will more reliably, maybe even more quickly, get good results if you do know what you're doing, and if you stay reasonably engaged with the development, even if not literally writing the code yourself.
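"Prompting close to the code" can be made concrete: the prompt is nearly pseudocode, and the person writing it already knows the algorithm. A hypothetical example (the prompt wording and function are my illustration, not from the thread):

```python
# Near-pseudocode prompt:
#   "Write bisect_left(sorted_list, target): keep lo/hi bounds,
#    loop while lo < hi, compare at the midpoint, narrow the
#    half that can't contain the answer, return the insertion index."
# Whoever wrote that prompt already understands binary search;
# the model is mostly transcribing, which is why results stay consistent.

def bisect_left(sorted_list, target):
    """Return the leftmost index where target could be inserted
    into sorted_list while keeping it sorted."""
    lo, hi = 0, len(sorted_list)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] < target:
            lo = mid + 1  # answer is in the right half
        else:
            hi = mid      # answer is at mid or to its left
    return lo
```

The prompt and the comments in the result are nearly the same text, which is the point: the understanding lives in the prompter.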

armchairhacker•1w ago
What are your exact prompts (including project context) and generated code?

And for those who are struggling with LLMs, what are their prompts and code?

FrankWilhoit•1w ago
He thinks they ought to converge. What does he think they ought to converge upon? How will he know that thing when he sees it? If he will know it when he sees it, why does he need help making it?

The answer to all of these, of course, is that convergence is not expected and correctness is not a priority. The use of an LLM is a boasting point, full stop. It is a performance. It is "look, Ma, no coders!". And it is only relevant, or possible, because although the LLM code is not right, the pre-LLM code wasn't right either. The right answer is not part of the bargain. The customer doesn't care whether the numbers are right. They care how the technology portfolio looks. Is it currently fashionable? Are the auditors happy with it? The auditors don't care whether the numbers are right: what they care about is whether their people have to go to any -- any -- trouble to access or interpret the numbers.