Chauffeur Knowledge and the Impending AI Crack-Up

https://ryanglover.net/blog/chauffeur-knowledge-and-the-impending-ai-crack-up
15•rglover•8mo ago

Comments

Terr_•8mo ago
> Chauffeur Knowledge

Going into this piece, I expected an analogy where the user is like an out-of-touch wealthy person who builds a shallow model of the world from what they hear from their LLM chauffeur or golf caddy.

That is something I fear will spread, as people put too much trust in the assistant in their pocket, turning to it at the expense of other sources of information.

> That's when it hit me: this is going to change everything; but not in the utopian "everything is magical" sense, but in the "oh, God, what have we done" sense.

I think of it like asbestos, or leaded gasoline. Incredibly useful in the right situation, but used so broadly that we regret it later. (Or at least, the people who didn't make their fortunes selling it do.)

andy99•8mo ago
This makes me think of Eternal September, which I'd say the author would argue we've reached with respect to coding.
rglover•8mo ago
I wasn't thinking about that when I wrote this but that's an accurate take.
satisfice•8mo ago
The repeated use of the phrase “it works” is unhelpful. What the author means is “it appears to work.”

There is a vast difference between actually working and looking superficially like it works.

This is a massive testing problem.
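
A toy illustration (mine, not from the thread) of the gap being described here: code can pass the obvious happy-path check while silently failing the case nobody tried.

```python
def average(xs):
    # A one-shot implementation that "appears to work":
    # correct for the inputs anyone is likely to try first.
    return sum(xs) / len(xs)

# The superficial check passes...
assert average([2, 4, 6]) == 4.0

# ...but the untested edge case fails: average([]) raises
# ZeroDivisionError, which no happy-path demo would reveal.
```

"It works" here really means "it worked on the inputs I happened to feed it."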

rglover•8mo ago
That's the thing, though, it does work. Does it need a fair amount of hand holding to get there for non-trivial tasks? Yes. But if you have a basic skill set and some patience, you can get impressive results with a shocking number of common problems.

You're right that the "working" is superficial in a one-shot sense. But if you spend the time to nudge it in the right direction, you can accomplish a lot.

satisfice•8mo ago
As a software tester who takes pride in his vocation, I need to use words more carefully. I hope you will, too.

You are saying "it does work" for something that often seems to work, and for which you almost never carefully check to see that it is working (because that would be expensive). "Does" implies that it almost always actually works. That's not been my experience, nor, as I look around, does it seem to be anyone else's.

rglover•8mo ago
> you almost never carefully check to see that it is working (because that would be expensive)

I don't know about you, but I test everything thoroughly, whether I wrote it myself or used an LLM.

I think you're nitpicking over language here when what I said is clear. It can and does work with the proper amount of attention and effort, but that doesn't mean that it will just magically work with a one-shot attempt (though simpler code certainly can—e.g., "write me a debounce function").
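
For reference, the kind of "debounce function" meant here can be sketched in a few lines. This is a hypothetical Python version (the comment names no language; debounce is more commonly a JavaScript idiom):

```python
import threading

def debounce(wait):
    """Decorator: delay calls to fn until `wait` seconds pass without
    another call, so only the last call in a rapid burst runs."""
    def decorator(fn):
        timer = None
        def wrapper(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()  # a newer call supersedes the pending one
            timer = threading.Timer(wait, fn, args, kwargs)
            timer.start()
        return wrapper
    return decorator

calls = []

@debounce(0.05)
def record(x):
    calls.append(x)

# Three rapid calls collapse into one trailing invocation with x=3.
record(1); record(2); record(3)
```

This is exactly the sort of small, well-trodden utility an LLM can plausibly one-shot, which is the point being made.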

As a software tester, you seem to be personalizing what I'm saying as encouraging developers to be a burden to you. Instead, I'm looking at it purely from a productivity standpoint. If your own team is just willy-nilly dropping code that maybe works on your lap (with limited to no testing of their own), it might be time to find a new team. Being a tester in that type of environment will always be stressful, irrespective of how the code is written.

anon373839•8mo ago
I think this author makes the mistake, as many people do when they project AI trends forward, of ignoring the feedback mechanisms.

> Third, and I think scariest: it means that programming (and the craft of software) will cease to evolve. We'll stop asking "is there a better way to do this" and transition to "eh, it works." Instead of software getting better over time, at best, it will stagnate indefinitely.

“Eh, it works” isn’t good enough in a competitive situation. Customers will notice if software has weird bugs, is slow, clunky to use, etc. Some bugs can result in legal liability.

When this happens, other firms will be happy to build a better mousetrap to earn the business, and that’s an incentive against stagnation.

Of course, the FAANG-type companies aren't very competitive. But their scale necessitates serious engineering efforts: a bad fuck-up can be really bad, depending on what it is.

rglover•8mo ago
> Customers will notice if software has weird bugs, is slow, clunky to use, etc. Some bugs can result in legal liability.

I'd like to believe this, but some ~30 years into the popular internet, we still have bug-riddled websites and struggle to build simple software that's both usable (UX) and stable. You're right that if a company offers an SLA they're liable, but there's a wide range of software out there that isn't bound by an SLA.

That means that as this thing unfurls, we either get a lot of broken/low-quality stuff, or even more consolidation into a few big players' hands.

> When this happens, other firms will be happy to build a better mousetrap to earn the business, and that’s an incentive against stagnation.

I agree this is likely, but the how is important. The hypothetical cost reduction of doing away with competent staff in favor of AI-augmented devs (or just agents) is too juicy not to become the norm (at least in the short term). We're already seeing major players enforce AI-driven development (and in some cases, like Salesforce, impose hiring freezes under the assumption that AI is enough for most tasks).

The optimist in me agrees with what you're saying, but my gut take is that there will be a whole boatload of irrational, careless behavior before we see any meaningful correction.