
Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
1•ykdojo•21s ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
1•gmays•47s ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•2m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•2m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•6m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•9m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
2•rcarmo•10m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•11m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•11m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•12m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•14m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•14m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•17m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•17m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•18m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•18m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•18m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•19m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•19m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•20m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•21m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•27m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•28m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•28m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
40•bookofjoe•28m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•29m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•30m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•31m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•31m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•31m ago•0 comments

Eric S. Raymond: why is there such a huge variance in results from using LLMs?

https://twitter.com/esrtweet/status/2016849708254179501
6•dist-epoch•1w ago

Comments

sara_builds•1w ago
The variance mostly comes down to prompt craft and context management.

People who get consistently good results have usually internalized a few things: (1) being explicit about constraints and output format, (2) providing relevant context without noise, (3) matching the model to the task (reasoning-heavy vs creative vs code), and (4) iterating on the prompt when something fails rather than assuming the model is broken.

I've seen the same person get wildly different results depending on whether they ask "write code to do X" vs "I need a function that takes A, returns B, handles edge case C, and should be optimized for D. Here's the existing code it needs to integrate with: [context]."

The gap between those two approaches can be a 10x difference in usefulness. Most of the "LLMs are useless" crowd and the "LLMs are magic" crowd are just working with very different prompt habits.
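The habits described above can be sketched as a small template. This is a minimal illustration of the pattern (explicit task, constraints, output format, relevant context); the `build_prompt` helper and its field labels are made up for this sketch, not any particular tool's API:

```python
def build_prompt(task, constraints, output_format, context=""):
    """Assemble a structured prompt: an explicit task, enumerated
    constraints, a required output format, and only the relevant context."""
    parts = [f"Task: {task}"]
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    parts.append(f"Output format: {output_format}")
    if context:
        parts.append(f"Relevant context:\n{context}")
    return "\n\n".join(parts)

# The vague version from the comment above:
vague = "write code to do X"

# The structured version, spelling out what the vague one leaves implicit:
specific = build_prompt(
    task="Write a function that takes A and returns B",
    constraints=["handle edge case C", "optimize for D"],
    output_format="a single Python function, no commentary",
    context="def existing_helper(a): ...",
)
```

The point isn't the helper itself; it's that every field forces you to state something the model would otherwise have to guess.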

tjr•1w ago
It appears to me that the people who consistently get the best results from LLM coding tools are prompting fairly close to the code. Maybe not quite at the level of writing pseudocode, but close enough that they really still need to understand software development.

Which doesn't quite gel with the notion that you don't need programmers, that you don't need to know how to program, and so on.

I feel pretty confident that, in fact, you don't need to know how to program. You can probably get good results without having a clue what you're doing, if you prompt well enough, or prompt long enough, or prompt repeatedly until it works. But I think you will more reliably, and maybe even more quickly, get good results if you do know what you're doing, and if you stay reasonably engaged with the development, even if you're not literally writing the code yourself.

armchairhacker•1w ago
What are your exact prompts (including project context) and generated code?

And for those who are struggling with LLMs, what are their prompts and code?

FrankWilhoit•1w ago
He thinks they ought to converge. What does he think they ought to converge upon? How will he know that thing when he sees it? If he will know it when he sees it, why does he need help making it?

The answer to all of these, of course, is that convergence is not expected and correctness is not a priority. The use of an LLM is a boasting point, full stop. It is a performance. It is "look, Ma, no coders!". And it is only relevant, or possible, because although the LLM code is not right, the pre-LLM code wasn't right either. The right answer is not part of the bargain. The customer doesn't care whether the numbers are right. They care how the technology portfolio looks. Is it currently fashionable? Are the auditors happy with it? The auditors don't care whether the numbers are right: what they care about is whether their people have to go to any -- any -- trouble to access or interpret the numbers.