
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
142•theblazehen•2d ago•42 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•33 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
28•jesperordrup•4h ago•16 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
223•dmpetrov•14h ago•117 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•5 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
183•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

The Human in the Loop

https://adventures.nodeland.dev/archive/the-human-in-the-loop/
47•artur-gawlik•2w ago

Comments

chrisjj•2w ago
> When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector?

If you have to ask, then you'd be better off putting that effort into fixing the test coverage.
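
A minimal sketch of turning that question into coverage: encode the attack vector itself as a regression test. Here safe_resolve is a hypothetical helper standing in for the fixed code, not anything from the article.

    import pytest
    from pathlib import Path

    def safe_resolve(base: Path, user_path: str) -> Path:
        # hypothetical fix under review: reject paths that escape the base directory
        resolved = (base / user_path).resolve()
        if not resolved.is_relative_to(base.resolve()):
            raise ValueError("path traversal attempt")
        return resolved

    # the "does this actually close the attack vector?" question, written as tests
    @pytest.mark.parametrize("attack", [
        "../etc/passwd",
        "a/../../etc/passwd",
        "/etc/passwd",
    ])
    def test_traversal_is_rejected(tmp_path, attack):
        with pytest.raises(ValueError):
            safe_resolve(tmp_path, attack)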

mpalmer•2w ago
Why would I want to take advice about keeping humans in the loop from someone who let an LLM write 90% of their blog post?
actionfromafar•2w ago
The human pressed the red button. :)
yohguy•2w ago
I don't like reading AI text because I feel each word matters a lot less; however, the message the author is conveying can be preserved. I read an article like this for the quality of the message, not the craftsmanship of the medium.
mpalmer•2w ago
If the author didn't have the good taste and decency to edit the painfully obvious generated text, I just assume the message is low quality.
AstroBen•2w ago
This is the new world we live in. Writers use AI to balloon a two-paragraph thought into a full article; readers then use AI to compress the article back into an easily digestible two-paragraph piece. Everyone's happy. Example:

Key points from The Human in the Loop:

- The author pushes back on the idea that AI has made software developers obsolete, arguing instead that it has shifted where human effort matters.

- AI is increasingly good at producing code quickly, but that doesn’t remove the need for human oversight—especially for correctness, security, edge cases, and architectural fit.

- The “human in the loop” is not a temporary bottleneck but the accountable party who must understand, review, and take responsibility for what ships.

- Senior engineers’ most valuable skill has always been judgment, not typing speed—and AI makes that judgment even more critical.

- The author warns against blaming AI for bugs or bad outcomes; responsibility still lies with the human who approved the result.

- Software practices, team structures, and workflows need to evolve to emphasize review, verification, and intent over raw code production.

scandox•2w ago
On what basis did you make this judgement? I found the article to be reasonable and not excessively padded.
insin•2w ago
But here's the thing. The LLM house writing style isn't just annoying, it's become unreadable through repeated exposure. This really gets to the heart of why human minds are starting to slide off it.
ericyd•2w ago
Not trying to be rude, but your very short reply is hard to understand. "Unreadable", "starting to slide off", I honestly don't know what you're saying here.
blenderob•2w ago
Pretty sure they are mocking LLM outputs by making their own comment look as if it came from an LLM. It's sarcasm.
MrJohz•2w ago
Other people might point to more specific tells, but instead I'll reference https://zanlib.dev/blog/reliable-signals-of-honest-intent/, which says that you can tell mainly because of the subconscious uncanny valley effect, and then you start noticing the tells afterwards.

Here, there's a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone's trying to write the perfect LinkedIn post but is slightly too good at it? It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).

yohguy•2w ago
There will always be a human in the loop; the question is at what level. It was a very short while ago, within the last couple of months in my case, that it went from having to work at the function level to what the posts describe (still not at the level the Death of SWE article claims). It is hard for me to imagine that LLMs can go 1 level higher anytime soon. Progress is not guaranteed. Regardless of whether it improves or not, I think it is best to assume that it won't and build using that assumption. The shortcomings and failings of the current (new) system are what end up creating the new patterns for work and the industry. I think that is the more interesting conversation: not how quickly we can ship code, but what this means for organizations, what skills become the most valuable, and what actually rises to the top.
kilroy123•2w ago
> LLMs can go 1 level higher anytime soon. Progress is not guaranteed.

I tend to agree, but I do think we'll get there in the next 5-10 years.

movedx01•2w ago
An AI-derived piece arguing with another AI-derived piece about AI. It's slop all the way down.
kardianos•2w ago
> My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.

I try to review 100% of my dependencies. My criticism of the npm ecosystem is that they say "I didn't review it, someone else wrote it" and everyone thinks that is an acceptable excuse.

scroot•2w ago
These posts claiming that "we will review the output," and that software engineers will still need to apply their expertise and wisdom to generated outputs, never seem to think this all the way through. Those who write such articles might indeed have enough experience and deep knowledge to evaluate AI outputs. But what of subsequent generations of engineers? What about the forthcoming wave of people who may never attain the (required) deep knowledge, because they've been dependent on these generation tools during the course of their own education?

The structures of our culture, combined with what generative AI necessarily is, mean that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of ameliorating the issue.

echelon•2w ago
The invention of calculators did not cause society to collapse.

Smart and industrious people will focus energy on economically important problems. That has always been the case.

Everything will work out just fine.

id•2w ago
>software engineers will still need to apply their expertise and wisdom to generated outputs

And in my experience they don't really do that. They trust that it'll be good enough.

candiddevmike•2w ago
This is why you aren't seeing GenAI used more in law firms. Lawyers can be disbarred for erroneous hallucinations, so they're all extremely cautious about using these tools. Imagine if there was that kind of accountability in our profession.
8organicbits•2w ago
Another thing I keep thinking about is that review is harder than writing code. A casual LGTM is suitable for peer review, but applying deep context and checking for logic issues requires more thought. When I write code, I usually learn something about software or the context. "Writing is thinking" in a way that reading isn't.
dfxm12•2w ago
I don't understand how this is a new or unique problem. Regardless of when or where (or if!) my coworkers got their degrees, before or after access to AI tools, some of them are intellectually curious. Some do their job well. Some are in over their head & are improving. Some are probably better suited for other lines of work. It's always been an organizational function to identify & retain folks who are willing and able to grow into the experience and knowledge required for the role they currently have and future roles where they may be needed.

Academically, this is a non-factor as well. You still learned your multiplication tables even though calculators existed, right?

entropicdrifter•2w ago
Agreed. This is a moral panic because people are learning and adapting in new ways.

Socrates blamed literacy for intellectual laziness among the youth compared to the old methods of memorization.

mpalmer•2w ago
The solution is to find a way to use these tools that saves us huge amounts of time but still forces us to think and document our decisions. Then, teach these methods in school.

Self-directed, individual use of LLMs for generating code is not the way forward for industrial software production.

entropicdrifter•2w ago
Personally, I'm not as worried about this as an issue going forward.

When you look at technical people who grew up with the imperfect user interfaces/computers of the 80s, 90s and 00s, before the rise of smartphones and tablets, you see people who have a naturally acquired knack for troubleshooting and for organically gaining an understanding of computers, despite (in most cases) never being grounded in the low-level mathematical underpinnings of computer science.

IMO, the imperfections of modern AI are likely going to lead to a new generation of troubleshooters who will organically be forced to accumulate real understanding from a top-down perspective in much the same vein. It's just going to cost us all an absurd amount of electricity.

andai•2w ago
> who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?

"A computer cannot be held accountable. Therefore a computer must never make a business decision." —IBM document from 1970s

Nevermark•2w ago
Unless not making a decision would, "through inaction, allow a human being to come to harm". — Asimov, "Runaround", 1942.

The slope between insignificant and significant actions is so enormously long and shallow that it isn't going to impede machine decision making unless some widely accepted red line is defined and institutionalized. Quickly.

If we can't agree that super-scaled predatory business models (unpermissioned or dark-permissioned surveillance, corporate sharing or selling of our information, algorithmic feed/ad manipulation based on such surveillance or other conflicts of interest, knowledge appropriation without permission or compensation, predatory financial practices, etc.) are unacceptable, and apply oversight with practical means for making violations reliably and deeply unprofitable on a risk-adjusted basis, or criminally prosecuted, the decision making of machines isn't going to be impeded even when it is obviously causing great but not-yet-illegal harm.

After all, the umbrella problem is scalable harm with unchecked incentives. Ethics and accountability overall, not machines in particular.

Scaling of harm (even if the negative externalities from individual incidents seem small) has to be the red line, i.e. unethical behavior.

As a community, I think most of us are aware that the big automated bureaucracies that make up tech giant aggregators' "customer service" are already making life changing decisions, too often capriciously, and often with little recourse for those unfairly harmed.

I have personally been afflicted by that problem.

We are going to need both effective brakes and a reverse gear to prevent this from becoming an uncontrolled descent.

(Not being cynical. But if something is to be done, we need to address the actual scale and state of the problem. There isn't time left in human history for more slow, incremental whack-a-mole efforts, or for unrewarded attempts at corporate shaming. Those have failed us.)

In the hyper-scaled world, ethics mean nothing if not backed up by economics.

piker•2w ago
> Mike asks: "If an idiot like me can clone a [Bloomberg terminal] that costs $30k per month in two hours, what even is software development?"

So that’s the baseline intellectual rigor we’re dealing with here.

TZubiri•2w ago
What is the Bloomberg terminal thing? Did someone vibecode a competitor?