frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


I'm reluctant to verify my identity or age for any online services

https://neilzone.co.uk/2026/03/im-struggling-to-think-of-any-online-services-for-which-id-be-will...
140•speckx•1h ago•47 comments

India's top court angry after junior judge cites fake AI-generated orders

https://www.bbc.com/news/articles/c178zzw780xo
212•tchalla•3h ago•95 comments

Apple introduces the new MacBook Air with M5

https://www.apple.com/newsroom/2026/03/apple-introduces-the-new-macbook-air-with-m5/
73•Garbage•1h ago•35 comments

The Xkcd thing, now interactive

https://editor.p5js.org/isohedral/full/vJa5RiZWs
583•memalign•4h ago•77 comments

Apple Introduces MacBook Pro with All‑New M5 Pro and M5 Max

https://www.apple.com/newsroom/2026/03/apple-introduces-macbook-pro-with-all-new-m5-pro-and-m5-max/
219•scrlk•1h ago•240 comments

Don't Become an Engineering Manager

https://newsletter.manager.dev/p/dont-become-an-engineering-manager
36•flail•1h ago•22 comments

Meta’s AI smart glasses and data privacy concerns

https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-privacy-concerns-workers-say-we-see-e...
1253•sandbach•17h ago•718 comments

Launch HN: Cekura (YC F24) – Testing and monitoring for voice and chat AI agents

12•atarus•1h ago•1 comments

I'm losing the SEO battle for my own open source project

https://twitter.com/Gavriel_Cohen/status/2028821432759717930
155•devinitely•1h ago•80 comments

British Columbia is permanently adopting daylight time

https://www.cbc.ca/news/canada/british-columbia/b-c-adopting-year-round-daylight-time-9.7111657
989•ireflect•19h ago•481 comments

Arm's Cortex X925: Reaching Desktop Performance

https://chipsandcheese.com/p/arms-cortex-x925-reaching-desktop
178•ingve•7h ago•90 comments

Claude's Cycles: Claude Opus 4.6 solves a problem posed by Don Knuth [pdf]

https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf
77•fs123•4h ago•21 comments

The Internet's Top Tech Publications Lost 58% of Their Google Traffic Since 2024

https://growtika.com/blog/tech-media-collapse
71•Growtika•1h ago•41 comments

Ars Technica fires reporter after AI controversy involving fabricated quotes

https://futurism.com/artificial-intelligence/ars-technica-fires-reporter-ai-quotes
465•danso•14h ago•287 comments

Computer Says No

https://koenvangilst.nl/lab/computer-says-no
43•vnglst•2d ago•18 comments

Show HN: React-Kino – Cinematic scroll storytelling for React (1KB core)

https://github.com/btahir/react-kino
7•bilater•2d ago•0 comments

History of the Graphical User Interface: The Rise (and Fall?) Of WIMP Design

https://www.uxtigers.com/post/gui-history
15•todsacerdoti•3d ago•9 comments

AI-generated art can't be copyrighted (Supreme Court declines review)

https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright
56•duggan•1h ago•31 comments

Apple unveils new Studio Display and all-new Studio Display XDR

https://www.apple.com/newsroom/2026/03/apple-unveils-new-studio-display-and-all-new-studio-displa...
77•victorbjorklund•1h ago•67 comments

We Built a Video Rendering Engine by Lying to the Browser About What Time It Is

https://blog.replit.com/browsers-dont-want-to-be-cameras
102•darshkpatel•2d ago•46 comments

Simple screw counter

https://mitxela.com/projects/screwcounter
210•jk_tech•2d ago•58 comments

Mullvad VPN: Banned TV Ad in the Streets of London [video]

https://www.youtube.com/watch?v=rwhznrpgl7k
173•vanyauhalin•3h ago•91 comments

Show HN: I built a sub-500ms latency voice agent from scratch

https://www.ntik.me/posts/voice-agent
495•nicktikhonov•18h ago•145 comments

C64: Putting Sprite Multiplexing to Work

https://bumbershootsoft.wordpress.com/2026/02/28/c64-putting-sprite-multiplexing-to-work/
35•ibobev•1d ago•1 comments

I built a pint-sized Macintosh

https://www.jeffgeerling.com/blog/2026/pint-sized-macintosh-pico-micro-mac/
71•ingve•8h ago•18 comments

DOS Memory Management

https://www.os2museum.com/wp/dos-memory-management/
86•ingve•2d ago•24 comments

How to sew a Hyperbolic Blanket (2021)

https://www.geometrygames.org/HyperbolicBlanket/index.html
35•aebtebeten•3d ago•2 comments

Physicists developing a quantum computer that’s entirely open source

https://physics.aps.org/articles/v19/24
165•tzury•16h ago•31 comments

First in-utero stem cell therapy for fetal spina bifida repair is safe: study

https://health.ucdavis.edu/news/headlines/first-ever-in-utero-stem-cell-therapy-for-fetal-spina-b...
333•gmays•1d ago•64 comments

New iPad Air, powered by M4

https://www.apple.com/newsroom/2026/03/apple-introduces-the-new-ipad-air-powered-by-m4/
427•Garbage•1d ago•661 comments

We Automated Everything Except Knowing What's Going On

https://eversole.dev/blog/we-automated-everything/
52•kennethops•2h ago

Comments

afry1•1h ago
"The future belongs to whoever understands what they just shipped."

Perfect summary.

It's like we invented a world where you can finally, _finally_ speedrun an enormous legacy codebase, and we all patted ourselves on the back like that was a good thing.

2OEH8eoCRo0•1h ago
I like that. These AIs are legacy codebase generators. Nobody knows how it works and everyone is afraid to touch it.
silverquiet•1h ago
Anything in production is legacy; I'm pretty sure it happens as soon as the code is shipped regardless of who wrote it.
alanbernstein•1h ago
True, but I think there's another dimension implied: how many devs are left that understand the code? Being able to start at zero is a fascinating surprise (compared to five years ago).
kennethops•1h ago
A comment I cannot stop thinking about is "we need to start thinking about production as throwaway." Which is a wild thought when I reflect on my career. We had so many DBs and servers that we couldn't touch because they were special snowflakes
allenu•1h ago
Yup. AI can't automate long-term responsibility and ownership of a product. It can produce output quicker but somebody still has to be responsible to the customer using said product. The hard limit is still the willingness of the human producing the code to back what's been output.
iammjm•1h ago
We are speedrunning legacy "codebases" all the time. Or do you conjure up your own pickaxe, mine your own minerals, produce your own electricity, and construct your own computers and networks first before you go off to develop an application? Would you even know how to do those things? That is all enormous legacy codebase that we speedrun all the time. Just add one more to it.
scared_together•42m ago
That’s all legacy but none of it is speedrunning.

If we could conjure pickaxes and electric power plants in a single day, that would be speedrunning.

afry1•22m ago
I sure don't.

But when I'm using all of those things (pickaxe, mineral mine, power station, internet network hub), I know that there was a thinking human being that took some measure of human care and consideration when creating them. And that there are people on the other side of the economic transaction to talk to or hold accountable when something goes wrong.

climike•1h ago
In a similar fashion it appears that article was automated - did the author read every word in their own article?
kennethops•1h ago
I tried to write it in more of a stream-of-consciousness style
frisia•1h ago
Actually unreadable
kennethops•1h ago
I'm not a writer. Just a guy with an opinion
Flashtoo•1h ago
Then just post your opinions rather than the text the LLM dreamed around your opinions. Short posts and tweets tend to be well-liked on HN, there is no need to puff it up to a big blog post.
frisia•1h ago
Look, I'm sympathetic to not feeling like you're a good writer, but there are plenty of writing styles which don't turn your opinions into overly dramatic AI slop. And now I don't even know which opinions are your own and which are from a GPT, hence my "unreadable" comment, even if it sounds harsh. But it literally is impossible to infer what your opinions actually are when they have been butchered this hard into slop.
digital-cygnet•1h ago
I like the thought behind the piece, but what I think the criticisms are reacting to is the profusion of short, bursty sentences (just like the ones in the parent post), which can be great when used sparingly, but start to feel repetitive and have a "LinkedIn"-ish vibe, at least to me. For example the very end:

  Most of you won't be able to answer that. And you already know it.
  
  That's the conversation this industry needs to have. Not tomorrow. Now.
I hope you don't take this the wrong way and do continue writing - I enjoyed this piece, just wanted to give some constructive feedback
kennethops•1h ago
I have been doing most of my writing over there...honestly I hate it. So thank you for the feedback.
allenu•1h ago
It would be so much easier just to see the three or four bullet points given to the LLM than to read it.
bena•1h ago
I keep seeing the canard that "Anyone with an idea and access to an AI agent can ship a product. What used to take a team of twenty and six months now takes one person and a weekend. That's not hype. It's happening right now, everywhere, all at once."

But I don't see it. Where is this glut of software?

kennethops•1h ago
I am mainly seeing it across a lot of my engineer friends and mentors whom I respect deeply. They are using swarms of agents to build CRMs and small-business tools and to run their homelabs.
caseyohara•1h ago
Where are these small businesses and startups? The software economy should be booming, right? I’m not seeing it.
raesene9•1h ago
There's a massive difference between launching a piece of software and launching a successful business.

Over the last couple of months I've seen a load of new "product launches" in my niche but when you look at them they're largely vibecoded and don't show deep understanding and sustainability, so it's pretty likely you'll never see them as successful businesses.

Looking at some of the related places like /r/sideproject/ there's a lot of releases and I'd be willing to suggest that most of them are using LLMs

caseyohara•55m ago
Then, respectfully, what is the point? Does the trillions-of-dollars AI industry exist to support a few hobbyists building niche products to scratch their own itch? I thought the promise here is increased productivity, presumably in the economic sense.

There seems to be a lot of hype, and has been for years, but I’m not seeing it materialize as actual economic output. Surely by now there should be lots of businesses springing up to capture all of this value created by vibecoded software.

raesene9•8m ago
Whilst I have no special knowledge, my expectation is it'll do both. If you reduce the barriers to coding you'll get more code, both at the hobbyist/one-person level and also at the large corp level.

Whether that translates into more value for those larger corps is the trillion-dollar question :) Writing code is a small part of the process of finding and shipping features that customers want, so it remains to be seen how far LLM tools move the needle.

I think it's fairly widely accepted that from a financial standpoint we're in an AI/LLM bubble. There has been more investment than we're likely to see financial benefits, but it's impossible to predict to what degree (if you can predict that and the timing you can make a lot of money!!)

ryanmcl•1h ago
I'm one of those people. Taught myself Rails 8 months ago at 45 with zero coding experience. I now have a production app with real Stripe payments, AI voice cloning, background audio processing, and a PWA with push notifications. Live users, live transactions.

Before AI assistance this simply wasn't in my possibility space. Not because I couldn't think through the problems, but because the gap between "I know what I want to build" and "I can actually build it" required years of skill acquisition I didn't have while raising my son and being a good husband (after some rough years).

The glut isn't visible yet because most of us aren't shipping to HN. We're shipping to our tiny audiences, our friends and family, our niche communities. The software exists... it's just not venture-scale, so nobody's writing about it.

matltc•1h ago
It is not true. Speaking from experience: a family member tried this and they couldn't get past a landing page.

I suppose if one is simultaneously ignorant when it comes to software and an expert at agentic workflow, then yeah sure, maybe--at the cost of how many tokens, though? But logically it seems that the former would preclude the latter.

Also, the "get it done in a weekend" seems to be a gross exaggeration.

cowlby•1h ago
Ever since Opus 4.6 came out, I've "vibecoded" a bunch of personal apps/CLIs that would've taken me months before. Some examples:

- CLI voice changer with cloned Overwatch voices on ElevenLabs.

- Brother P-Touch label maker using HTML/CSS. Their app is absolutely atrocious.

- Converted a FileMaker CRM into a Next.js/Supabase app.

- Dozens of drag-n-drop or 1-click/CLI tools. Think flattening a folder, a zip file.

- Dozens of Chrome Extensions and TamperMonkey user scripts. Think blocking ads with very targeted xpath.

But when I think about sharing them it feels like what's the point since anyone can make them themselves?
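The "very targeted xpath" item in the list above is the kind of thing that really is a few lines once written down. A minimal sketch in Python using only the stdlib (the `ad-banner` class name and the page fragment are invented; real ad blocking runs in the browser, this just shows the selector idea):

```python
import xml.etree.ElementTree as ET

# Toy page fragment; the "ad-banner" class name is made up for illustration.
page = """
<body>
  <div class="content">article text</div>
  <div class="ad-banner">sponsored junk</div>
  <div class="content">more text</div>
</body>
"""

root = ET.fromstring(page)

# ElementTree supports a small XPath subset; matching a tag plus an exact
# attribute value is the essence of a "very targeted" selector.
ads = root.findall(".//div[@class='ad-banner']")
for ad in ads:
    root.remove(ad)  # the matched divs are direct children of <body> here

remaining = [d.attrib["class"] for d in root.findall(".//div")]
print(remaining)  # ['content', 'content']
```

A TamperMonkey userscript does the same thing in the page's live DOM, but the selector is where all the actual thought goes.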

Rygian•1h ago
The title reminds me of the single lesson I retained from a training for upcoming people managers: "You can delegate everything except accountability."
andai•1h ago
The whole point of AI is that we don't have to think anymore. Knowing what's going on is the AI's job.

Not saying that's how it should be, that's just the world I predict in the not too distant future.

I love thinking, but most people I know seem to experience it as a form of physical pain.

Traubenfuchs•1h ago
This person has never worked with decades-old government, bank or tax (return) code where all that's ever done is edge cases, with implementations of new laws and capabilities being forever bolted onto each other. Systems that were half migrated from a PL/I / COBOL mess to Java 7 by Accenture until the money ran out, with the result that both systems now exist forever and have to be integrated with each other for years. In the end you have decades-old code bases maintained by people with less than 10 years of total work tenure, who will leave for greener pastures soon. No one to ask but some old grumpy grey beard with a royal salary who barely does any work but has some ancient wisdom to share.

No one understanding what's going on inside of complex systems in financially constrained environments built and maintained by average, at best, engineers is the norm and is what keeps the world running.

None of that is a symptom of AI. The only change AI brings is that even first person developers don't know anymore what the fuck they just deployed.

kennethops•1h ago
Cries in defense contractor noises. (I have.)

I wanted to touch on this point but then this post started getting WAY too long.

>the only change AI brings is that even first person developers don't know anymore what the fuck they just deployed.

This to me is going to make your first point 100x worse in every damn way

terseus•1h ago
AI may not be the source of the problem but can make it a hundred times worse.
simgt•1h ago
I agree with all of this, but that's assuming we've reached a plateau. Maybe Claude 6.3 will be able to churn through 10M lines of Java and Cobol, tidy it and convert it to Rust. Or maybe not, but so far the scaling laws are holding.
bluetomcat•1h ago
We went from expressing computation via formal, mostly non-ambiguous languages with strict grammar and semantics, to a fuzzy and flaky probabilistic system that tries to mimic code that was already written. What could go wrong?
gwynforthewyn•1h ago
Honestly, the post itself reads very generated, very rage bait. I have so much more faith in us and our hobby/industry than this blog post.

There are reports of industries trying to use these tools to generate as much as possible, sure. There are also people generating bad art and unpleasant prose and using llms to generate nonsense they don’t pay attention to.

I don’t see why that implies that you or I lost interest in tinkering with toys we build. If I want to spend 4 weeks understanding oauth a little better by implementing a client, I still can and I still do.

Automating our builds absolutely didn't create a cathedral of complexity while nobody noticed. It did mean I can open a Free Software project, read the build file, and understand how to build the thing. That's the opposite of generating complexity.

I worry about our future generations as much as the next person, but this low effort pabulum doesn’t represent the thoughtful industry and hobby that I love.

kennethops•1h ago
>Honestly, the post itself reads very generated, very rage bait. I have so much more faith in us and our hobby/industry than this blog post.

Don't get me wrong. I think the future is very bright for software. I have friends who are scientists and biomedical professionals, and I am excited to see what they are able to do with the powers of software when they don't need to care about syntax and can lead with their intentions alone.

The rage bait part is mainly my frustrations manifesting. As an SRE, my annoyance comes through a bit when it comes to how fast developers are shipping vs. how fast our rails can keep up.

raesene9•1h ago
I don't think the hobbyist interest will go away, but I can see what's happening affecting businesses that use software.

For most businesses, software is just a means to an end, they don't really care how high quality and thoughtful the systems they use are (e.g. look at any piece of "enterprise" software)

What LLMs have done is make it much, much easier for orgs to launch new features and services, both internally and externally, without necessarily understanding the complexity.

For me, that's what this post tapped into. Many orgs already have more complexity than they can reasonably handle. Massively accelerating development is not going to make that problem better :)

DaedalusII•1h ago
eventually i realised it was cheaper just to vibecode and buy put options over my company

by managing the risk of failure and technical debt with a financial instrument, i have a lot more freedom to move fast and break things, and scale aggressively

dvsfish•1h ago
Would this not be considered insider trading? Serious question
DaedalusII•1h ago
it is if you have specific event driven knowledge that the put options would gain from

but if you just buy deep out of the money put options every year, say at a 50% price drop, in a 10b5-1 structure, it's ok. you will get sued a bit more than usual

not much different than buying life insurance, except if your company crashes like monday.com, some schmuck who sold you the puts will have to pay you a tonne of cash. then you can do a dilutionary rights issue or just use the cash to buy a boat in miami and start something else
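For what it's worth, the payoff arithmetic behind the hedge described above is easy to sketch. All the numbers here are invented for illustration (a hypothetical $100 stock, a strike at 50% of that, a made-up premium); this is not anyone's actual position:

```python
# Net profit at expiry of deep out-of-the-money puts (illustrative numbers only).
strike = 50.0        # 50% below a hypothetical $100 share price
premium = 1.50       # invented cost per share for such a far-OTM put
contracts = 100      # each standard equity option contract covers 100 shares
shares = contracts * 100

def put_payoff(price_at_expiry: float) -> float:
    """Net profit of the position if the stock ends at price_at_expiry."""
    intrinsic = max(strike - price_at_expiry, 0.0)  # put pays strike minus price, floored at 0
    return (intrinsic - premium) * shares

# If nothing bad happens, the puts expire worthless and you lose only the premium...
print(put_payoff(100.0))   # -15000.0
# ...but a severe crash pays out many multiples of that premium.
print(put_payoff(10.0))    # 385000.0
```

The asymmetry is the whole point: a small, known annual cost against a large payout precisely in the scenario where the job disappears.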

andai•1h ago
There's a funny angle to all this. There was an article last year where the author asked AI for a web app. It installed a gigabyte of node modules and crashed on startup.

He told it to calm down and just use PHP; it gave him 100 lines with no dependencies that worked the first time.

The Pieter Levels stack :)

Of course, this is ideal for a solo entrepreneur. If you are employed, then you cannot finish it in 100 lines. How will you get paid to maintain it for the next ten years, and hire all your friends to help you?

I think this difference in incentives explains most of what we've been complaining about for the last twenty years.

jollyllama•29m ago
You don't have to go whole-hog and go back to PHP, a reasonable Django application would suffice :)

But the point remains: the NPM monoculture is indefensible.

iammjm•1h ago
Should the goal really be to build a system that we completely understand, or to build a system that solves a problem? We don't fully understand quantum physics, yet we understand it well enough to build helpful systems on top of it. Or like not knowing exactly what every bee in a hive does at any moment, yet still reliably harvesting honey in the end. I think people have this modernist desire for absolute truths and certainty, while the world we live in is clearly postmodern. There are no certainties, only probabilities. So embrace the chaos, try to build systems that help contain entropy for some useful purpose, and accept that all of them will eventually fail in some way and you will need to course correct. Faulkner is dead, long live Pynchon
bluGill•1h ago
Someone needs to verify it works well enough to trust the output. Of course some things are more critical than others. I don't worry too much about a badly written game, just whether it's fun - but I still don't want it to delete or transfer my money while I'm playing. However, there are also systems where people die if they fail, and those need a lot more trust/understanding.
leecommamichael•1h ago
Whoa a CEO writing about why their product is especially important in this very moment!
bluGill•1h ago
You cannot understand everything. That has been the case since long before AI. I have a vague idea how the linux kernel works, and I could figure it out (I once found and fixed a bug in FreeBSD device drivers) - but I don't, I just trust it works. I've never looked at sqlite to understand how it works - I know enough SQL to be dangerous and trust it works. I know very in depth how the logging framework of my project works - maintaining that code is part of my day job and so I need to know, but the hundreds of other developers in the company that use it trust it works. Meanwhile my co-workers are writing code that I don't understand, I trust they do it well until proven otherwise.

AI is very useful, but it so far doesn't write the type of code I can trust. Thus I use it but I carefully review everything it does.

kennethops•1h ago
>You cannot understand everything.

I 100% agree with this in an individual-person sense, but in a humanity sense someone does understand Linux very deeply and is very intentional about how they change it, which to me is how I gain trust in it.

does trust change when the entire SDLC is AI?

bluGill•45m ago
Linux is less than 40 years old. Most of the people who designed it are still alive. How will the situation be in 40 years when the current maintainers are dead? (reiserfs comes to mind - it was just becoming great when [censored] and filesystems in linux went backward for many years, will that be allowed to happen next time?)

There are systems still in use from the 1960s (maybe before) - the original authors are at least retired and likely dead. I question how well the replacements understand all that. Sure they have had to dig in and understand some parts, but what about the parts that just keep working and don't need new features?

aetherson•31m ago
Genuine question: is there a big inherent difference between "I don't understand this thing but I think this other human does," and "I don't understand this but I think this other AI does"?

If your answer is "yes," do you think that's inherent to the (metaphysical?) fact of it being AI or to specific limitations to current AI? If the latter, what changes to AI would let you trust it?

bluGill•13m ago
I don't know. AI has an understanding of some really complex things, but it also does some really stupid things. Depending on which it did most recently for me, I change my answer.

The question is does AI understand well enough to maintain that thing for whatever maintenance I need to do in the future?

valdork59•18m ago
"In short, I suggest that the programmer should continue to understand what he is doing, that his growing product remains firmly within his intellectual grip. It is my sad experience that this suggestion is repulsive to the average experienced programmer, who clearly derives a major part of his professional excitement from not quite understanding what he is doing. In this streamlined age, one of our most undernourished psychological needs is the craving for Black Magic and apparently the automatic computer can satisfy this need for the professional software engineer, who is secretly enthralled by the gigantic risks he takes in his daring irresponsibility. For his frustrations I have no remedy......"
whynotmaybe•1h ago
I'm still balancing on whether we "need" to know what's happening.

Very few understand deeply what's happening within the computer between the CPU and the bridges and the rest.

The FDIV bug in 1994 took us all by surprise because we assumed bugs couldn't exist in hardware: it either works or it doesn't.
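For context, the FDIV flaw was exactly that kind of impossible-seeming hardware bug: certain division operand pairs came back wrong around the fifth significant digit. The widely cited trigger pair is easy to check on any correct divider (the buggy value in the comments below is quoted from the published reports, so treat it as approximate):

```python
# The classic Pentium FDIV test case: 4195835 / 3145727.
# A correct FPU gives ~1.333820449; affected chips were reported to return
# ~1.3337, i.e. the quotient went wrong in the fifth significant digit.
x, y = 4195835.0, 3145727.0
q = x / y
print(f"{q:.9f}")  # 1.333820449

# The quick spreadsheet-style check that circulated at the time:
# x - (x/y)*y should be essentially zero on correct hardware,
# but came out as a conspicuously large number on flawed chips.
residue = x - q * y
print(abs(residue) < 1e-6)  # True
```

The point the comment makes stands: before this, almost nobody budgeted for the possibility that the arithmetic itself was wrong.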

When I'm using Firebase or AWS, I don't know the underlying system; I don't know why some resources can be created with an underscore while others can't start with a number.

Yet it works.

We're working in layers where usually we only touch the last one. Yes, understanding the others is great to debug.

I'm even wondering whether we need tests when they are written by the same LLM that wrote the code.

bluefirebrand•1h ago
> I'm still balancing on whether we "need" to know what's happening.

Of course we do. Otherwise we start trying to water crops with Brawndo

> Very few understand deeply what's happening within the computer between the CPU and the bridges and the rest

But it's very very important that those people deeply understand it. We cannot replace their actual knowledge with LLM approximations of their knowledge

seethishat•1h ago
Abstractions have been happening since the 1970s, when ASM was replaced by the C programming language. From there we got C++ (look, it actually has a string type that most humans understand!), then memory-safe managed languages like Go, which is almost human-readable, runs almost everywhere, and doesn't have buffer overflows.

ASM was machine specific. C was portable but required expert programmers. C++ was even more user friendly, but still very hard for normal people. Today, most anyone can write a program in Go.

The more we abstract, the less knowledge/expertise is needed. So yes, programs are being built by people who don't really understand what they are doing. That is intended.

FromTheFirstIn•27m ago
Abstractions truncate the decision space of the layer above them by making understandable trade-offs. LLMs don't abstract anything; your code is still in Python or PHP or Go. It just feels like they abstract if you don't understand the output, since not understanding the layer down is what we associate with non-leaky, reliable abstractions. LLMs are abstractions the same way that your code editor is an abstraction: it's not a layer, it's an interface.
philipstorry•25m ago
Not a bad article - thanks!

Others are pointing out that you cannot understand everything - and that's true enough.

But you only need to understand what's important. The experience of a good expert helps you to find that out.

As a systems administrator the recent AWS outage in the Middle East is the best recent example. There will be roughly three types of companies, separated by their understanding:

- Don't Understand - these companies thought that the cloud would handle this kind of thing for them, and are probably going to be doing a lot of finger-pointing in the near future.

- Do Understand, Don't Care - these companies did understand that high availability meant going multi-region, but decided against it for whatever reason. Probably cost vs perceived likelihood. These companies know that they've made a mistake. Short term they're wondering how to survive it, long term they'll be re-assessing their risk acceptance. Many may decide to stay single-region, but at least understand why.

- Do Understand, Do Care - these companies will simply be checking that their procedures worked for any manual parts of their failover, plus possibly looking at any improvements they can make given the real-life experience they've gained.

An LLM is just going to tell you how to implement it. It's not going to be thinking "what sort of availability do we require?", it may never start that conversation unless explicitly prompted. And even then it's going to return consensus opinions, which may not be what you want when evaluating risk.

I'd love to think a lot of companies will be looking at this event and updating their own risk register or justifying their existing risk decisions for hosting. But let's be honest - most won't even have thought about it, and won't until it goes wrong.