
Coding after coders: The end of computer programming as we know it

https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html?smid=url-share
72•angst•1d ago

Comments

bookofjoe•1d ago
https://www.nytimes.com/2026/03/12/magazine/ai-coding-progra...
jazz9k•1d ago
Because they are still making the same salary. In 5 years, when their job is eliminated, and they can't find work, they will regret their decision.
chrisra•1h ago
Their decision to... use AI for coding?
lelanthran•1h ago
Well, their position on AI.

By their own accounts they are just pressing enter.

ripe•1d ago
> it could also be that these software jobs won’t pay as well as in the past, because, of course, the jobs aren’t as hard as they used to be. Acquiring the skills isn’t as challenging.

This sounds opposite to what the article said earlier: newbies aren’t able to get as much use out of these coding agents as the more experienced programmers do.

kittikitti•1h ago
This article is ragebaiting people and it's an embarrassing piece from the NYT.
ramesh31•1d ago
Because we love tech? I'm absolutely terrified about the future of employment in this field, but I wouldn't give up this insane leap of science fiction technology for anything.
bigstrat2003•1h ago
I love tech - tech that actually works well. The current tech we have for AI does not, so I'm not excited about it.
kittikitti•1h ago
"One such test for Python code, called a pytest"

The brain rot is real: the author couldn't even think of "unit test".

mkehrt•1h ago
Why would you expect a reporter to magically know what a "unit test" is? Sounds like a simple miscommunication with one of his sources. Not perfect but not "brain rot".
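For readers outside the field, the mix-up the thread is arguing about is small: pytest is a Python test runner, and the "test" the article gestures at is just an assert-bearing function that pytest discovers and runs. A minimal illustrative example (the `slugify` function and test names are made up, not from the article):

```python
# slugify.py -- the code under test (invented for illustration)
def slugify(title: str) -> str:
    """Lowercase a title and replace spaces with hyphens."""
    return title.strip().lower().replace(" ", "-")


# test_slugify.py -- unit tests that pytest would discover by name
# (files matching test_*.py, functions matching test_*) and run,
# reporting any failed assert as a test failure.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_whitespace():
    assert slugify("  Padded Title ") == "padded-title"
```

So "a pytest" in the article is loose phrasing for "a unit test run under pytest"; the concept the reporter was reaching for is exactly this kind of small, automatic check.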
hn_acc1•1h ago
A really good pattern-matching engine is an "insane leap of science fiction"? It saves me a bit of typing here and there with some good pattern matching. Trying to get it to do anything more than a few lines gives me gibberish, or an infinite loop of "Oh, you're right, I need to do X, not Y", over and over - and that's Opus 4.5 or whatever the recent one is.

Would you give it access to your bank account, your 401k, trust it to sell your house, etc? I sure wouldn't.

deflator•1d ago
What is a coder? Someone who is handed the full specs and sits down and just types code? I have never met such a person. The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.
theshackleford•2h ago
> The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.

I’ve tended to hold the same opinion of what the average SWE thinks everyone else does.

pjmlp•1h ago
Never worked on offshoring projects? That is exactly what the sweatshop coders do.
recursivedoubts•1h ago
I think that the current AI tooling is a much bigger threat to offshore sweatshops than to domestic programmers.

Why deal with language barriers, time shifts, etc. when a small team of good developers can be so much more productive, allegedly?

pjmlp•1h ago
It certainly is:

https://www.theregister.com/2026/01/19/hcl_infosys_tcs_wipro...

neonate•2h ago
Other gift link: https://www.nytimes.com/2026/03/12/magazine/ai-coding-progra...
zjp•1h ago
There is no such thing as "after coders": https://zjpea.substack.com/p/embarrassingly-solved-problems

This excerpt:

>A.I. had become so good at writing code that Ebert, initially cautious, began letting it do more and more. Now Claude Code does the bulk of it.

is a little overstated. I think the brownfield section has things exactly backwards. Claude Code benefits enormously from large, established codebases, and it’s basically free riding on the years of human work that went into those codebases. I prodded Claude to add SNFG depictions to the molecular modeling program I work on. It couldn’t have come up with the whole program on its own and if I tried it would produce a different, maybe worse architecture than our atomic library, and then its design choices for molecules might constrain its ability to solve the problem as elegantly as it did. Even then, it needed a coworker to tell me that it had used the incorrect data structure and needed to switch to something that could, when selected, stand in for the atoms it represented.

Also this:

>But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose.

Isn’t really true. It’s the free-riding problem again. The thing about an ESP is that the LLM has the advantage of either a blank canvas (if you’re using one to vibe code a startup), or at least the fact that several possibilities converge on one output, but, genuinely, not all of those realities include good coding architecture. Models can make mistakes, and without a human in the loop those mistakes can render a codebase unmaintainable. It’s a balance. That’s why I don’t let Claude stamp himself to my commits even if he assisted or even did all the work. Who cares if Claude wrote it? I’m the one taking responsibility for it. The article presents Greenfield as good for a startup, and it might be, but only for the early, fast, funding rounds, when you have to get an MVP out right now. That’s an unstable foundation they will have to go back and fix for regulatory or maintenance reasons, and I think that’s the better understanding of the situation than framing Aayush’s experience as a user error.

Even so, “weirdly jazzed about their new powers” is an understatement. Every team, including ours, has decades of programmer-years of tasks in the backlog; what’s not to love about something you can set loose on pet peeves for free and then see if the reality matches the ideal? git reset --hard if you don't like what it does, and if you do, all the better. The Cuisy thing with the script for the printer is a perfect application of LLMs, a one-off that doesn’t have to be maintained.

Also, the whole framing is weirdly self-limiting. The architectural taste that LLMs are, again, free riding off of is hard won by doing the work that more senior engineers are now giving to LLMs instead of juniors. We’re setting ourselves up for a serious collective action problem as a profession. The article gestures at this a couple of times.

The thing about threatening LLMs is pretty funny too but something in me wants to fall back to Kant's position that what you do to anything you do to yourself.

movpasd•1h ago
Regarding LLM's performances on brownfield projects, I thought of Naur's "Programming as Theory Building". He explains an example of a compiler project that is taken over by a team without guidance from the original developers:

> "at [the] later stage the original powerful structure was still visible, but made entirely ineffective by amorphous additions of many different kinds"

Maybe a way of phrasing it is that accumulating a lot of "code quality capital" gives you a lot more leverage over technical debt, but eventually it does catch up.

htx80nerd•1h ago
I spent ~6hrs with Claude trying to fix a web worker bug in a small JS code base Claude made. In the end it failed and I ran out of credits. Claude kept wanting to rip out huge blocks of code and replace entire functions. We never got any closer to a solution. The Claude hype is unreal. My 'on the ground' experience has been vastly different.
kuboble•1h ago
Yes, you can get a project with claude to a state of unrecoverable garbage. But with a little experience you can learn what it's good at and this happens less and less.
zjp•1h ago
That isn't my experience. My code and bug tracker are public, so I have the privilege of being able to paste URLs to tickets into Claude Code with the prompt "what the fuck?" and it usually comes up with something workable on its own.
kittikitti•1h ago
Another trash article from the New York Times, who financially benefit from this type of content because of their ongoing litigation against OpenAI. I think the assumption that developers don't code is wrong. Most software engineers don't even want to code; they are opportunists looking to make money. I have yet to experience this cliff of coding. These people aren't asking hard enough questions. I have a bunch of things I want AI to build that it completely fails on.

The article could have been written from a very different perspective. Instead, the "journalists" likely interviewed a few insiders from Big Tech and generalized. They don't get it. They never will.

Before the advent of ChatGPT, maybe 2 in 100 people could code. I was actually hoping AI would increase programming literacy but it didn't, it became even more rare. Many journalists could have come at it from this perspective, but instead painted doom and gloom for coders and computer programming.

The New York Times should look in the mirror. With the advent of the iPad, most experts agreed that they would go out of business because a majority of their revenue came from print media. Look what happened.

Understand this, most professional software and IT engineers hate coding. It was a flex to say you no longer code professionally before ChatGPT. It's still a flex now. But it's corrupt journalism when there is a clear conflict of interest because the NYT is suing the hell out of AI companies.

hn_acc1•59m ago
Agreed - just like the Fortune article talking about (Edit: Morgan Stanley, not GS) saying "the AI revolution is coming next year, and will decimate tons of industries, and no one is ready for it". They quote Altman and Musk. Gee - what did you expect from those two snake-oil salesmen?
htx80nerd•1h ago
You have to hold the AI's hand to get even simple vanilla JS done correctly. Or have it do framework code that is well documented all over the net. I love AI and use it for programming a lot, but the limitations are real.
keeganpoppen•1h ago
that's just not even remotely my experience. and i am ~20k hours into my programming career. ai makes most things so much faster that it is hard to justify ever doing large classes of things yourself (as much as this hurts my aesthetic sensibilities, it simply is what it is).
lumost•1h ago
Part of this depends on if you care that the AI wrote the code "your way." I've been in shops with rather exotic and specific style guides and standards which the AI would not or will not conform to.
seanmcdirmid•1h ago
Not in my experience. But then again, lots of programmers are limited in how they use AI to write code. Those limitations are definitely real.
GalaxyNova•1h ago
Not what I've experienced
sp00chy•1h ago
That is exactly my experience with Claude Code as well. It can create a lot of stuff impressively, but with LOTS more code than necessary. It’s not really effective in the end. I have more than 35 years of coding experience and always dig into the newest stuff. Quality-wise it’s still not more than junior-dev stuff even with the latest models, sorry. And I know how to talk to these machines.
TuxSH•56m ago
I don't have as many years of professional experience as you do, but IMO code pissing is one of the areas LLMs and "agentic tools" shine the least.

In both personal projects and $dayjob tasks, the highest time-saving AI tasks were:

- "review this feature branch" (containing hand-written commits)

- "trace how this repo and repo located at ~/foobar use {stuff} and how they interact with each other, make a Mermaid diagram"

- "reverse engineer the attached 50MiB+ unstripped ELF program, trace all calls to filesystem functions; make a table with filepath, caller function, overview of what caller does" (the table is then copy-pasted to Confluence)

- basic YAML CRUD

Also while Anthropic has more market share in B2B, their model seems optimized for frontend, design, and literary work rather than rigorous work; I find it to be the opposite with their main competitor.

Claude writes code rife with safety issues/vulns all the time, or at least more than other models.

wek•1h ago
This is not my experience either. If you put the work in upfront to plan the feature, write the test cases, and then loop until they pass... you can build a lot of high quality software quickly. The difference between a junior engineer using it and a great architect using it is significant. I think of it as an amplifier.
moezd•1h ago
AI-assisted code can't even stick to the API documentation, especially if the data structures are not consistent and have evolved over time. You will see Claude literally pulling function after function out of thin air, desperately trying to fulfill your complicated business logic, and even when it's complete, it doesn't look neat at all. Yes, it will have test coverage, but one more feature request will probably break the camel's back. And if you raise that PR to the rest of your team, good luck trying to summarise it all to your colleagues.

However if you just have an easy project, or a greenfield project, or don't care about who's going to maintain that stuff in 6 months, sure, go all in with AI.

jcranmer•45m ago
I must say, I do love how this comment has provoked such varying responses.

My own observation about using AI to write code is that it changes my position from that of an author to that of a reviewer. And I find code review a much more exhausting task than writing code in the first place, especially when you have to work out how and why the AI-generated code is structured the way it is.

lelanthran•1h ago
This is a very one-sided article, unashamedly so.

Where are the references to the decline in quality and the embarrassing outages at Amazon, Microsoft, etc.?

dboreham•1h ago
Everything you read is in service of someone's business model.
esafak•1h ago
Do we know that it decreased the quality, or introduced more opportunities for bugs by simply increasing the velocity? If every commit has a fixed probability of having a bug, you'll run into more bugs in a week by going faster.
pydry•55m ago
Do we know it increased the velocity and didn't just churn out more slop?

Even before AI the limiting factor on all of the teams I ever worked on was bad decisions, not how much time it took to write code. There seem to be more of those these days.

fixxation92•1h ago
Conversations of the future...

"Can you believe that Dad actually used to have to go into an office and type code all day long, MAUALLY??! Line by line, with no advice from AI, he had to think all by himself!"

aleph_minus_one•1h ago
> "Can you believe that Dad actually used to have to go into an office and type code all day long, MAUALLY??! Line by line, with no advice from AI, he had to think all by himself!"

Grumpy old man: "That's exactly why our generation was so much smarter than today's whippersnappers: we were thinking from morning to night the whole long day."

lagrange77•1h ago
It's really time that mainstream media picks up on 'agentic coding' and the implications of writing software becoming a commodity.

I'm an engineer (not only software) by heart, but after seeing what Opus 4.6 based agents are capable of and especially the rate of improvement, i think the direction is clear.

thrawa8387336•1h ago
I like 4.6 and agents based on it but can only qualify it as moderately useful.
CollinEMac•1h ago
>but like most of their peers now, they only rarely write code.

Citation needed. Are most developers "rarely" writing code?

dboreham•1h ago
In my direct experience this is mostly true.
thrawa8387336•1h ago
And was true before AI
jcranmer•1h ago
I'd expect that probably less than 10% of my time is spent actually writing code, and not because of AI, but because enough of it is spent analyzing failures, reading documents, participating in meetings, putting together presentations, answering questions, reading code, etc. And even when I have a nice, uninterrupted coding session, I still spend a decent fraction of that time thinking through the design of how I want the change rather than actually writing the code to effect that change.
fraywing•1h ago
I keep getting stuck on the liability problem of this supposed "new world". If we take this as far as it goes: AI agent societies that design, architect, and maintain the entire stack E2E with little to no oversight. What happens when rogue AIs do bad things? Who is responsible? You have to have fireable senior engineers who understand deep fundamentals to make sure things aren't going awry, right? /s
comrade1234•56m ago
Having an AI is like having a dedicated assistant or junior programmer that sometimes has senior-level insights. I use it to do tedious tasks where I don't care about the code - like today I used it to generate a static web page that let me experiment with the spring-ai chat bot code I was writing - basic. But yesterday it was able to track down the cause of a very obscure bug having to do with a pom.xml loading two versions of the same library - in my experience I've spent a full day on that type of bug and Claude was able to figure it out from the exception in just minutes.

But when I've used AI to generate new code for features I care about and will need to maintain it's never gotten it right. I can do it myself in less code and cleaner. It reminds me of code in the 2000s that you would get from your team in India - lots of unnecessary code copy-pasted from other projects/customers (I remember getting code for an Audi project that had method names related to McDonalds)

I think though that the day is coming when I can trust the code it produces, and at that point I'll just be writing specs. It's not there yet though.

xenadu02•31m ago
It's an accelerator. A great tool if used well. But just like all the innovations before it that were going to replace programmers, it simply won't.

I used Claude just the other day to write unit test coverage for a tricky system that handles resolving updates into a consistent view of the world and handles record resurrection/deletion. It wrote great test coverage because it parsed my headerdoc and code comments that went into great detail about the expected behavior. The hard part of that implementation was the prose I wrote and the thinking required to come up with it. The actual lines of code were already a small part of the problem space. So yeah Claude saved me a day or two of monotonously writing up test cases. That's great.

Of course Claude also spat out some absolute garbage code using reflection to poke at internal properties because the access level didn't allow the test to poke at the things it wanted to poke at, along with some methods that were calling themselves in infinite recursion. Oh and a bunch of lines that didn't even compile.

The thing is about those errors: most of them were a fundamental inability to reason. They were technically correct in a sense. I can see how a model that learned from other code written by humans would learn those patterns and apply them. In some contexts they would be best-practice or even required. But the model can't reason. It has no executive function.

I think that is part of what makes these models both amazingly capable and incredibly stupid at the same time.

gist•4m ago
For one thing, the comments here address quality and issues as they stand today, not where things are heading. Quality will change quicker than anyone expects. I wonder how many people on HN remember when the first Mac came out with MacPaint, and then PageMaker or Quark. That didn't evolve anywhere near as quickly as AI appears to be.

Also, I am not seeing anyone consider the gap between what a programmer considers quality and what "gets the job done" (as mentioned in the article), which is what matters in any business. (An example from typesetting: the original laser printers were only 300dpi, but after a short period 1200dpi arrived, "good enough" for camera-ready copy.)

Show HN: Channel Surfer – Watch YouTube like it’s cable TV

https://channelsurfer.tv
289•kilroy123•2d ago•112 comments

Can I run AI locally?

https://www.canirun.ai/
659•ricardbejarano•9h ago•188 comments

Hammerspoon

https://github.com/Hammerspoon/hammerspoon
127•tosh•3h ago•46 comments

Mouser: An open source alternative to Logi-Plus mouse software

https://github.com/TomBadash/MouseControl
73•avionics-guy•3h ago•24 comments

Qatar helium shutdown puts chip supply chain on a two-week clock

https://www.tomshardware.com/tech-industry/qatar-helium-shutdown-puts-chip-supply-chain-on-a-two-...
267•johnbarron•9h ago•264 comments

New 'negative light' technology hides data transfers in plain sight

https://www.unsw.edu.au/newsroom/news/2026/03/New-negative-light-technology-hides-data-transfers-...
21•wjSgoWPm5bWAhXB•2d ago•8 comments

Parallels confirms MacBook Neo can run Windows in a virtual machine

https://www.macrumors.com/2026/03/13/macbook-neo-runs-windows-11-vm/
123•tosh•7h ago•152 comments

Stanford researchers report first recording of a blue whale's heart rate (2019)

https://news.stanford.edu/stories/2019/11/first-ever-recording-blue-whales-heart-rate
24•eatonphil•2h ago•15 comments

TUI Studio – visual terminal UI design tool

https://tui.studio/
497•mipselaer•11h ago•264 comments

Show HN: Context Gateway – Compress agent context before it hits the LLM

https://github.com/Compresr-ai/Context-Gateway
43•ivzak•3h ago•29 comments

Elon Musk pushes out more xAI founders as AI coding effort falters

https://www.ft.com/content/e5fbc6c2-d5a6-4b97-a105-6a96ea849de5
164•merksittich•5h ago•188 comments

Using Thunderbird for RSS

https://rubenerd.com/using-thunderbird-for-rss/
35•ingve•3d ago•4 comments

Your phone is an entire computer

https://medhir.com/blog/your-phone-is-an-entire-computer
176•medhir•3h ago•165 comments

John Carmack about open source and anti-AI activists

https://twitter.com/id_aa_carmack/status/2032460578669691171
163•tzury•3h ago•242 comments

Launch HN: Captain (YC W26) – Automated RAG for Files

https://www.runcaptain.com/
39•CMLewis•6h ago•15 comments

The Wyden Siren Goes Off Again: We'll Be "Stunned" by NSA Under Section 702

https://www.techdirt.com/2026/03/12/the-wyden-siren-goes-off-again-well-be-stunned-by-what-the-ns...
285•cf100clunk•5h ago•93 comments

Bucketsquatting is finally dead

https://onecloudplease.com/blog/bucketsquatting-is-finally-dead
285•boyter•13h ago•151 comments

Launch HN: Spine Swarm (YC S23) – AI agents that collaborate on a visual canvas

https://www.getspine.ai/
75•a24venka•8h ago•60 comments

Lost Doctor Who Episodes Found

https://www.bbc.co.uk/news/articles/c4g7kwq1k11o
165•edent•16h ago•49 comments

Source code of Swedish e-government services has been leaked

https://darkwebinformer.com/full-source-code-of-swedens-e-government-platform-leaked-from-comprom...
179•tavro•12h ago•175 comments

Meta Platforms: Lobbying, dark money, and the App Store Accountability Act

https://github.com/upper-up/meta-lobbying-and-other-findings
1108•shaicoleman•11h ago•468 comments

Exploring JEPA for real-time speech translation

https://www.startpinch.com/research/en/jepa-encoder-translation/
6•christiansafka•2d ago•0 comments

The wild six weeks for NanoClaw's creator that led to a deal with Docker

https://techcrunch.com/2026/03/13/the-wild-six-weeks-for-nanoclaws-creator-that-led-to-a-deal-wit...
47•wateroo•2h ago•4 comments

Hyperlinks in terminal emulators

https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda
74•nvahalik•18h ago•50 comments

You deleted everything and AWS is still charging you?

https://jvogel.me/posts/2026/aws-still-charging-you/
17•ke4qqq•3h ago•9 comments

The Accidental Room (2018)

https://99percentinvisible.org/episode/the-accidental-room/
18•blewboarwastake•3h ago•1 comments

Okmain: How to pick an OK main colour of an image

https://dgroshev.com/blog/okmain/
216•dgroshev•4d ago•42 comments

Executing programs inside transformers with exponentially faster inference

https://www.percepta.ai/blog/can-llms-be-computers
279•u1hcw9nx•1d ago•112 comments

E2E encrypted messaging on Instagram will no longer be supported after 8 May

https://help.instagram.com/491565145294150
335•mindracer•8h ago•171 comments

Militaries are scrambling to create their own Starlink

https://www.newscientist.com/article/2517766-why-the-worlds-militaries-are-scrambling-to-create-t...
61•mooreds•4h ago•96 comments