frontpage.

Show HN: Seedance 2.0 Release

https://seedancy2.com/
1•funnycoding•32s ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•35s ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•53s ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•1m ago•1 comment

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•2m ago•0 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•3m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•6m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•7m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•7m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•8m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•9m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•10m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•11m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•11m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•11m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
2•simonw•12m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•12m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•13m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•14m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•15m ago•1 comment

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•21m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•21m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•22m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
2•tusslewake•24m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•25m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•25m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•25m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
8•samasblack•27m ago•4 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•28m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•29m ago•0 comments

The Human in the Loop

https://adventures.nodeland.dev/archive/the-human-in-the-loop/
47•artur-gawlik•2w ago

Comments

chrisjj•2w ago
> When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector?

If you have to ask, then you'd be better off putting that effort into fixing the test coverage.
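
To make that concrete, a hypothetical sketch (not from the thread; sanitizePath and the "./files" module are invented names): encoding the attack vector itself as a regression test turns "does this actually close the attack vector?" into a question the suite answers on every run.

    // Hypothetical regression test for a path-traversal fix, using Node's
    // built-in test runner. sanitizePath and "./files" are assumed names.
    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { sanitizePath } from "./files";

    test("rejects traversal outside the upload root", () => {
      for (const attack of ["../../etc/passwd", "uploads/../../secret"]) {
        // The fix is only "done" if every known attack string throws.
        assert.throws(() => sanitizePath("/srv/uploads", attack));
      }
    });

    test("still accepts legitimate relative paths", () => {
      assert.equal(
        sanitizePath("/srv/uploads", "avatars/me.png"),
        "/srv/uploads/avatars/me.png"
      );
    });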

mpalmer•2w ago
Why would I want to take advice about keeping humans in the loop from someone who let an LLM write 90% of their blog post?
actionfromafar•2w ago
The human pressed the red button. :)
yohguy•2w ago
I don't like reading AI text because I feel each word matters a lot less; however, the message the author is conveying can be preserved. I read an article like this for the quality of the message, not the craftsmanship of the medium.
mpalmer•2w ago
If the author didn't have the good taste and decency to edit the painfully obvious generated text, I just assume the message is low quality.
AstroBen•2w ago
This is the new world we live in. Writers use AI to balloon a 2 paragraph thought into a full article, readers then use AI to compress the article into something akin to a 2 paragraph easily digestible piece. Everyone much happy. Example:

Key points from The Human in the Loop:

- The author pushes back on the idea that AI has made software developers obsolete, arguing instead that it has shifted where human effort matters.

- AI is increasingly good at producing code quickly, but that doesn’t remove the need for human oversight—especially for correctness, security, edge cases, and architectural fit.

- The “human in the loop” is not a temporary bottleneck but the accountable party who must understand, review, and take responsibility for what ships.

- Senior engineers’ most valuable skill has always been judgment, not typing speed—and AI makes that judgment even more critical.

- The author warns against blaming AI for bugs or bad outcomes; responsibility still lies with the human who approved the result.

- Software practices, team structures, and workflows need to evolve to emphasize review, verification, and intent over raw code production.

scandox•2w ago
On what basis did you make this judgement? I found the article to be reasonable and not excessively padded.
insin•2w ago
But here's the thing. The LLM house writing style isn't just annoying, it's become unreadable through repeated exposure. This really gets to the heart of why human minds are starting to slide off it.
ericyd•2w ago
Not trying to be rude, but your very short reply is hard to understand. "Unreadable", "starting to slide off": I honestly don't know what you're saying here.
blenderob•2w ago
Pretty sure they are mocking LLM outputs by making their own comment look as if it came from an LLM. It's sarcasm.
MrJohz•2w ago
Other people might point to more specific tells, but instead I'll reference https://zanlib.dev/blog/reliable-signals-of-honest-intent/, which says that you can tell mainly because of the subconscious uncanny valley effect, and then you start noticing the tells afterwards.

Here, there's a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone's trying to write the perfect LinkedIn post but is slightly too good at it? It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).

yohguy•2w ago
There will always be a human in the loop; the question is at what level. It was a very short while ago, in the last couple of months in my case, that it went from having to work at a function level to what the posts describe (still not at the level the Death of SWE article claims). It is hard for me to imagine that LLMs can go 1 level higher anytime soon. Progress is not guaranteed. Regardless of whether it improves or not, I think it is best to assume that it won't and to build using that assumption. The shortcomings of the current (NEW) system are what end up creating the new patterns for work and the industry. I think that is the more interesting conversation: not how quickly we can ship code, but what that means for organizations, what skills become the most valuable, and what actually rises to the top.
kilroy123•2w ago
> LLMs can go 1 level higher anytime soon. Progress is not guaranteed.

I tend to agree, but I do think we'll get there in the next 5-10 years.

movedx01•2w ago
AI-derived piece arguing with another AI-derived piece about AI. It's slop all the way down.
kardianos•2w ago
> My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.

I try to review 100% of my dependencies. My criticism of the npm ecosystem is that they say "I didn't review it, someone else wrote it" and everyone thinks that is an acceptable excuse.
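
For scale, the standard npm subcommands at least make the reviewed set explicit (a sketch using only real npm commands; the reading itself stays manual):

    # Install exactly what the reviewed, committed lockfile pins; fails if
    # package.json and package-lock.json have drifted apart.
    npm ci

    # Check the pinned tree against known advisories.
    npm audit

    # Enumerate the full transitive tree, i.e. everything a 100% review
    # would actually have to read.
    npm ls --all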

scroot•2w ago
These posts claiming that "we will review the output," and that software engineers will still need to apply their expertise and wisdom to generated outputs, never seem to think this all the way through. Those who write such articles might indeed have enough experience and deep knowledge to evaluate AI outputs. But what of subsequent generations of engineers? What about the forthcoming wave of people who may never attain the required deep knowledge, because they've been dependent on these generation tools during the course of their own education?

The structure of our culture, combined with what generative AI necessarily is, means that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of ameliorating the issue.

echelon•2w ago
The invention of calculators did not cause society to collapse.

Smart and industrious people will focus energy on economically important problems. That has always been the case.

Everything will work out just fine.

id•2w ago
> software engineers will still need to apply their expertise and wisdom to generated outputs

And in my experience they don't really do that. They trust that it'll be good enough.

candiddevmike•2w ago
This is why you aren't seeing GenAI used more in law firms. Lawyers can be disbarred for erroneous hallucinations, so they're all extremely cautious about using these tools. Imagine if there was that kind of accountability in our profession.
8organicbits•2w ago
Another thing I keep thinking about is that review is harder than writing code. A casual LGTM is suitable for peer review, but applying deep context and checking for logic issues requires more thought. When I write code, I usually learn something about software or the context. "Writing is thinking" in a way that reading isn't.
dfxm12•2w ago
I don't understand how this is a new or unique problem. Regardless of when or where (or if!) my coworkers got their degrees, before or after access to AI tools, some of them are intellectually curious. Some do their job well. Some are in over their head & are improving. Some are probably better suited for other lines of work. It's always been an organizational function to identify & retain folks who are willing and able to grow into the experience and knowledge required for the role they currently have and future roles where they may be needed.

Academically, this is a non-factor as well. You still learned your multiplication tables even though calculators existed, right?

entropicdrifter•2w ago
Agreed. This is a moral panic because people are learning and adapting in new ways.

Socrates blamed writing for intellectual laziness among the youth compared to the old methods of memorization

mpalmer•2w ago
The solution is to find a way to use these tools that saves us huge amounts of time but still forces us to think and document our decisions. Then, teach these methods in school.

Self-directed, individual use of LLMs for generating code is not the way forward for industrial software production.

entropicdrifter•2w ago
Personally, I'm not as worried about this as an issue going forward.

When you look at technical people who grew up with the imperfect user interfaces/computers of the 80s, 90s and 00s before the rise of smartphones and tablets, you see people who have a naturally acquired knack for troubleshooting and organically gaining understanding of computers despite (in most cases) never being grounded in the low-level mathematical underpinnings of computer science.

IMO, the imperfections of modern AI are likely going to lead to a new generation of troubleshooters who will organically be forced to accumulate real understanding from a top-down perspective in much the same vein. It's just going to cost us all an absurd amount of electricity.

andai•2w ago
> who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?

"A computer cannot be held accountable. Therefore a computer must never make a business decision." —IBM document from 1970s

Nevermark•2w ago
Unless not making a decision would, "through inaction, allow a human being to come to harm". — Asimov, "Runaround", 1942.

The slope between insignificant and significant actions is so enormously long and shallow that it isn't going to impede machine decision making unless some widely accepted red line is defined and institutionalized. Quickly.

If we can't agree that super-scaled predatory business models (unpermissioned or dark-permissioned surveillance, corporate sharing or selling of our information, algorithmic feed/ad manipulation based on such surveillance or other conflicts of interest, knowledge appropriation without permission or compensation, predatory financial practices, ... etc.) are unacceptable, and apply oversight with practical means for making violations reliably unprofitable on a risk-adjusted basis or criminally prosecuted, the decision making of machines isn't going to be impeded even when it is obviously causing great but not-yet-illegal harm.

After all, the umbrella problem is scalable harm with unchecked incentives. Ethics and accountability overall, not machines in particular.

Scaling of harm (even if the negative externalities from individual incidents seem small) has to be the red line, i.e., unethical behavior.

As a community, I think most of us are aware that the big automated bureaucracies that make up tech giant aggregators' "customer service" are already making life changing decisions, too often capriciously, and often with little recourse for those unfairly harmed.

I have personally been afflicted by that problem.

We are going to need both effective brakes and a reverse gear to prevent this from being an uncontrolled descent.

(Not being cynical. But if something is to be done, we need to address the actual scale and state of the problem. There isn't time left in human history for more slow, incremental whack-a-mole efforts, or unrewarded attempts at corporate shaming. Those have failed us.)

In the hyper-scaled world, ethics mean nothing if not backed up by economics.

piker•2w ago
> Mike asks: "If an idiot like me can clone a [Bloomberg terminal] that costs $30k per month in two hours, what even is software development?"

So that’s the baseline intellectual rigor we’re dealing with here.

TZubiri•2w ago
What is the Bloomberg terminal thing? Did someone vibecode a competitor?