
Hacker News front page now, but the titles are honest

https://dosaygo-studio.github.io/hn-front-page-2035/news-honest.html
999•keepamovin•2h ago•223 comments

Garage – An S3 object store so reliable you can run it outside datacenters

https://garagehq.deuxfleurs.fr/
133•ibobev•2h ago•20 comments

Cursor Acquires Graphite

https://graphite.com/blog/graphite-joins-cursor
97•timvdalen•1h ago•52 comments

GotaTun -- Mullvad's WireGuard Implementation in Rust

https://mullvad.net/en/blog/announcing-gotatun-the-future-of-wireguard-at-mullvad-vpn
374•km•6h ago•85 comments

Amazon will allow ePub and PDF downloads for DRM-free eBooks

https://www.kdpcommunity.com/s/article/New-eBook-Download-Options-for-Readers-Coming-in-2026?lang...
340•captn3m0•7h ago•182 comments

The FreeBSD Foundation's Laptop Support and Usability Project

https://github.com/FreeBSDFoundation/proj-laptop
68•mikece•2h ago•25 comments

TikTok Deal Is the Shittiest Possible Outcome, Making Everything Worse

https://www.techdirt.com/2025/12/19/tiktok-deal-done-and-its-somehow-the-shittiest-possible-outco...
145•lateforwork•1h ago•90 comments

Show HN: I Made Loom for Mobile

https://demoscope.app
15•admtal•44m ago•6 comments

Believe the Checkbook

https://robertgreiner.com/believe-the-checkbook/
25•rg81•2h ago•5 comments

Beginning January 2026, all ACM publications will be made open access

https://dl.acm.org/openaccess
1909•Kerrick•1d ago•230 comments

Show HN: Stepped Actions – distributed workflow orchestration for Rails

https://github.com/envirobly/stepped
62•klevo•5d ago•9 comments

Prepare for That Stupid World

https://ploum.net/2025-12-19-prepare-for-that-world.html
25•speckx•51m ago•11 comments

Texas is suing all of the big TV makers for spying on what you watch

https://www.theverge.com/news/845400/texas-tv-makers-lawsuit-samsung-sony-lg-hisense-tcl-spying
1109•tortilla•2d ago•553 comments

Programming language speed comparison using Leibniz formula for π

https://niklas-heer.github.io/speed-comparison/
14•PKop•4d ago•8 comments

Does my key fob have more computing power than the Lunar lander?

https://www.buzzsprout.com/2469780/episodes/18340142-17-does-my-key-fob-have-more-computing-power...
22•jammcq•5d ago•16 comments

Building a Transparent Keyserver

https://words.filippo.io/keyserver-tlog/
32•noident•2h ago•8 comments

We pwned X, Vercel, Cursor, and Discord through a supply-chain attack

https://gist.github.com/hackermondev/5e2cdc32849405fff6b46957747a2d28
1043•hackermondev•22h ago•382 comments

Getting bitten by Intel's poor naming schemes

https://lorendb.dev/posts/getting-bitten-by-poor-naming-schemes/
232•LorenDB•12h ago•123 comments

1.5 TB of VRAM on Mac Studio – RDMA over Thunderbolt 5

https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-studio-rdma-over-thunderbolt-5
540•rbanffy•19h ago•193 comments

I have to give Fortnite my passport to use Bluesky

https://spitfirenews.com/p/why-i-have-to-give-fortnite-my-passport-to-use-bluesky
62•malshe•1h ago•55 comments

Noclip.website – A digital museum of video game levels

https://noclip.website/
376•ivmoreau•15h ago•48 comments

How to think about durable execution

https://hatchet.run/blog/durable-execution
72•abelanger•1w ago•24 comments

From Zero to QED: An informal introduction to formality with Lean 4

https://sdiehl.github.io/zero-to-qed/01_introduction.html
120•rwosync•5d ago•15 comments

History LLMs: Models trained exclusively on pre-1913 texts

https://github.com/DGoettlich/history-llms
662•iamwil•19h ago•323 comments

GPT-5.2-Codex

https://openai.com/index/introducing-gpt-5-2-codex/
549•meetpateltech•23h ago•300 comments

Pingfs: Stores your data in ICMP ping packets (2020)

https://github.com/yarrick/pingfs
72•linkdd•5d ago•25 comments

Designing a Passive Lidar Detector Device

https://www.atredis.com/blog/2025/11/20/designing-a-passive-lidar-detection-sensor
51•speckx•3d ago•4 comments

How China built its ‘Manhattan Project’ to rival the West in AI chips

https://www.japantimes.co.jp/business/2025/12/18/tech/china-west-ai-chips/
423•artninja1988•22h ago•520 comments

Prompt caching for cheaper LLM tokens

https://ngrok.com/blog/prompt-caching/
224•samwho•3d ago•50 comments

Graphite Is Joining Cursor

https://cursor.com/blog/graphite
61•fosterfriends•1h ago•21 comments

Using AI Generated Code Will Make You a Bad Programmer

https://unsolicited-opinions.rudism.com/bad-programmer/
77•speckx•2h ago

Comments

crimsoneer•1h ago
I think this is a slightly silly take.

Yes, taking the bus to work will make me a worse runner than jogging there. Sometimes, I just want to get to a place.

Secondly, I'm not convinced the best way to learn to be a good programmer is just to do a whole project from 0 to 100. Intentional practice is a thing.

oofbey•1h ago
Using a compiler will also make you much worse at writing assembly code. Doesn’t bother me at all. Haven’t written any assembly since the 20th century.
29ebJCyy•1h ago
Having someone else write the code is about as far from intentional practice as can be.

I do think the “becoming dependent on your replacement” point is somewhat weak. Once AI is as good as the best human at programming (which I think could still be many years away), the conversation is moot.

takira•1h ago
Agreed mostly, especially in terms of efficiency. I have, however, been seeing more people recently with a built-in dependency on their IDEs to solve their problems.
mrkeen•1h ago
Maybe it's more like being a taxi driver using a self-driving car.
darkwater•1h ago
Yep, this is the only analogy that makes sense. And if, like in the taxi situation, you are the owner of the taxi license, then you win because you keep making money but now you don't have to drive. But if OTOH you are just driving for a salary, bad news, you need to find another job now. Maybe if you are a very good driver, good looking and with good manners, some rich guy can hire you as his personal driver but otherwise...
OptionOfT•1h ago
No, it's more akin to running around the neighborhood for 3 miles vs driving to the gym and run on a treadmill there for 3 miles.
PunchyHamster•1h ago
AI bus sometimes will just decide to take you to different city on a whim tho
true2octave•1h ago
> It's probably fine--unless you care about self-improvement or taking pride in your work.

I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time

darkwater•1h ago
And you/we will be replaced by an AI that will solve the business problem (the day they get so good to actually do that, which might happen or not but... who knows?)
rybosworld•1h ago
And the person that hand-writes the code won't be replaced?
darkwater•1h ago
Yes, as well.

There are probably 2 ways to see the future of LLMs / AI: they are either going to have the capabilities to replace all white collar work, or they are not.

If you think they are going to replace us, then you can either surrender or fight back, and personally I read all these anti-AI posts as fighting back, to help people realize we might be digging our own grave.

If, OTOH, you see AI as a force-multiplier tool that's never going to completely replace a human developer then yes, probably the smartest thing to do is to learn how to master this new tool, but at the same time keep in mind the side effects it might bring, like atrophy.

shadowgovt•59m ago
My personal goal has been to dig that grave ever since I could hold a shovel.

We've always been in the business of replacing humans in the 3-D's space (dirty, dangerous, dull... and to be clear, data manipulation for its own sake is dull). If we make AI that replaces 90% of what I do at my desk every day... we did it. We realized the dream from the old Tom Swift novels, where he comes up with an idea for an invention and hands it off to his computer to extrapolate, or the ship's computer in Star Trek acting as a perfect engineering and analytical assistant, taking fuzzy asks from humans and turning them into useful output.

rybosworld•44m ago
The problem is that this time, we're creating a competing intelligence that in theory could replace all work, AND, that competing intelligence is ultimately owned/controlled by a few dozen very rich guys.

They aren't going to willingly spread the wealth.

rybosworld•57m ago
Realistically I think the only way to fight back is unions.
sallveburrpi•1h ago
I really really hope an AI will do this work and solve all the “business problems” so I can go and be a goat herder
noman-land•1h ago
Go herd goats. You don't need to wait for AI to destroy your livelihood.
defterGoose•1h ago
Yeah, but there's nothing like some sweet, sweet justification.
shadowgovt•1h ago
Herding goats doesn't solve the interesting technical problem I'm trying to solve.

Point is: if that problem is solvable without me, that's the win condition for everyone. Then I go herd goats (and have this nifty tool that helps me spec out an optimal goat fence while I'm at it).

lelanthran•25m ago
> Point is: if that problem is solvable without me, that's the win condition for everyone.

The problem is solvable without you. I don't even need to know what the problem actually is, because the odds of you being one of the handful of people in the world so critical that the world notices their passing are so low that I have a better chance of winning a lottery jackpot than of you being some critical piece of some solution.

sallveburrpi•44m ago
I need that sweet AI-enabled UBI first to do it comfortably
Tade0•49m ago
I was thinking of buying land and planting beetroot, which I would be picking by hand, cutting into thin slices and freeze-drying for sale.

I have buy-in from a former co-worker with whom I remained in touch over the years, so there will be at least two of us working the fields.

sallveburrpi•42m ago
I unironically have a 5 year plan to get out of tech and into something more “real”. I want to work on something that helps actual humans not these “business problems”
lelanthran•27m ago
I'm skeptical of claims like this.

After all, you can go and be a goat herder right now, and yet you are presumably not doing this.

Nothing is stopping you being a goat herder - the place that is paying you for solving business problems will continue just fine if you leave, after all. Your presence there is not required.

tomjen3•1h ago
For me AI is really powerful autocomplete. Like you said, I wrote the abstraction years ago. Writing the abstraction again now is not required.

A time and place may come where the AI are so powerful I’m not needed. That time is not right now.

I have used Rider for years at this point and it automatically handles most imports. It’s not AI, but its one of the things that is just not needed for me to even think about.

saubeidl•1h ago
Maybe you become worse at solving business problems with technology once you let that muscle atrophy?
hudon•1h ago
and a teacher is hired to teach, but some self-improve so they may become headmaster
shams93•1h ago
I agree, I was always annoyed in projects where these kids thought they were still in school and spinning up incredible levels of over abstraction that led to some really horrible security problems.
wrs•55m ago
Are you a consultant? Because otherwise there’s a thing called a “career ladder”, and you are very much being paid to self-improve. And if you don’t, that’s going to feature prominently in your next promotion review.
avgDev•52m ago
I love to code, like fun code, solving a relatively small concrete problem with code feels rewarding to me....however, writing business code on the other hand? Not really.

I do however, love solving business problems. This is what I am hired for. I speak to VP/managers to improve their day to day. I come up with feasible solution and translate them into code.

If AI could actually code, like really code(not here is some code, it may or may not work go read documentation to figure out why it doesn't), I would just go and focus on creating affordable software solutions to medium/small businesses.

This is kind of like gardening/farming, before industrial revolution most crops required a huge work force, these days with all the equipment and advancements a single farmer can do a lot on their own with small staff. People still "hand" garden for pleasure, but without using the new tech they wouldn't be able to compete on a big scale.

I know many fear AI, but it is progress and it will never stop. I do think many devs are intelligent and will be able to evolve in the workplace.

lelanthran•29m ago
> I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time

So, this "solve business problems" is some temporary[1] gig for you?[2]

------------------------------

[1] I'm reminded of the anti-union people who are merely temporarily embarrassed millionaires.

[2] Skills atrophy. Maybe you won't need the atrophied skill in the future, but how sure are you that this is the case? The eventual outcome?

alexgotoi•1h ago
Oh no, AI-generated code will make me a bad programmer? Thank God I’ve been hand-crafting my 500-line regex monstrosities for 20 years—clearly the gold standard. Next you’ll tell me copy-pasting Stack Overflow turns me into a cargo cultist. Wake up: bad programmers have existed since punch cards; AI just speeds up the Darwinian cull. Use it to boilerplate the boring bits, then actually grok what it spits out. Or keep Luddite-posting while the rest of us ship.
frizlab•1h ago
> I love to write code. I very much do not love to read, review, and generate feedback on other people's code. I understand it's a good skill to develop and can be instrumental in helping to shape less experienced colleagues into more productive collaborators, but I still hate it.

Same. Writing code is easy. Reading code is very very hard.

eduction•1h ago
What an odd thing for them to put in the article. This is an example of AI generated code making someone a better programmer (by improving their ability to read code and give feedback). So it contradicts the title.

They could rename it "Using AI Generated Code Makes Programming Less Fun, for Me", that would be more honest.

The problem for programmers is (as a group) they tend to dislike the parts of their job that are hardest to replace with AI and love the stuff that is easiest for machines to copy. It turns out meetings, customer support, documentation, tests, and QA are core parts of being a good engineer.

dahateb•22m ago
I actually find that a disturbing assumption. I've learned a lot from reading other people's code, seeing how they were thinking and spotting errors, so the good and the bad. I believe that in order to actually write good code it's important to understand the context of the task, which basically requires a lot of code reading, and which is also sometimes quite enjoyable when you have competent authors. Reading code is an essential part of the game. If you cannot do that you'll just create huge balls of mud, with or without AI usage. Though using AI will speedrun the mud, so yeah, there is an argument for not using it.
kevin42•1h ago
Is it just me, or does anyone else use AI not just to write code, but to learn? Since I've been using Claude I've learned a lot about Rust by having it build things for me, then working with that code. I've never been a front-end guy, but I had it write a Chrome plugin for me, then used that code to learn how it works. It's not a black box to me, but I don't need to look up some CSS stuff I've never used. I can prompt Claude to write it, then look at it and go "Huh, that's how it works." Better than researching it myself: I can see an example of exactly how it's done, then learn from that.

I'm doing a lot of new things I never would have done before. Yes, I could have googled APIs and read tutorials, but I learn best by doing, and AI helps me learn a lot faster.

okokwhatever•1h ago
This is the smartest answer in this polarized thread
RationPhantoms•1h ago
Absolutely. It's a tireless Rubik's cube, one that you can rotate endlessly to digest new material. It doesn't sigh heavily or run out of mental bandwidth to answer. Yes, it should not be trusted with high-precision information, but the world can get by quite well on vibes.
mmoll•1h ago
That may be dangerous. The more obscure the topic, the more likely it is that the AI will come up with a working but needlessly convoluted solution.
kevin42•51m ago
Compared to what though? I have ended up with needlessly convoluted solutions when learning something the old-fashioned way before. Then over time, as I learn more, I improve my approach.

Not everyone has access to an expert that will guide them to the most efficient way to do something.

With either form of learning though, critical thinking is required.

pdntspa•1h ago
I second this. It's like having a second brain with domain expertise in pretty much anything I could want to ask questions of. And while factual assertions may still be problematic (hallucinations), I can very quickly run code and see if it does what I want or not. I don't care if it hallucinates if it solves my problem with code that is half decent. Which it does.
fpauser•53m ago
> I don't care if it hallucinates if it solves my problem with code that is half decent. Which it does.

Sometimes.

pdntspa•41m ago
Then you don't know how to work with it. Just like with a real programmer, first-pass code is meh. But then you circle back and have it refine.
xp84•30m ago
A competent developer should be able to read the code, spot any defects in “decency”, and fix them (or indeed, explain as you would to a junior dev how you want it fixed and let AI fix it). And of course they should have tests that should be able to categorically prove that the code does what it is supposed to do.
shadowgovt•1h ago
I have definitely had Claude make recommendations that gave me structural insight into the code that I didn't have on my own, and I integrated that insight.

People who claim "It's not synthesized, it's just other people's work run through a woodchipper" aren't precisely right, but they also aren't precisely wrong... And in this space, having the whole ecosystem of programmers who published code looking over my shoulder as I try to solve problems is a huge boon.

striking•1h ago
I do agree this is where AI shines. If you need a quick rehash of something that's been done a zillion times before or a quick integration between two known good components, AI's great.

But the skills you describe are still skills, reading and researching and doing your own fact finding are still important to practice and be good at. Those things only get more important in situations off the beaten path, where AI doesn't always give you trustworthy answers or do trustworthy work.

I'm still going to nurture some of these skills. If I'm trying to learn, I'll stick to using AI only when I'm truly stuck or no longer having fun.

outside2344•1h ago
I am using AI to learn EVERYTHING. Spanish, code, everything. Honestly, the largest acceleration I am getting is in research towards design docs (which then get used for implementation).
chankstein38•54m ago
I'm curious how the Spanish is going! Have you used any interesting methods, or are you just kind of talking to it and asking it questions about Spanish?
chankstein38•55m ago
Me too! I got into ESP32s and sensors thanks to AI. I wouldn't have had time or energy after stressful work all day but thanks to them I can get firmware written for my projects. Along the way I'm also learning how the firmware has to be written and finding issues with what the AI wrote and correcting them.

If people aren't learning from AI it's their fault. Yeah AI makes stuff up and hallucinates and can be wrong but how is that different than a distracted senior dev? AI is available to me 24/7 to answer my questions in minutes or seconds where half the time when I message people I have to wait 30-60min for a response.

People just need to approach things intelligently and actually learn along the way. You can easily get to the point where you're thinking more clearly about a problem than the AI writing your code pretty quickly if you just pay attention and do the research you need to understand what's happening. They're not as factual as a textbook but they don't need to be to give you the space to ask the right questions and they'll frequently provide sources (though I'd heavily recommend checking them. Sometimes the sources are a joke)

jgbuddy•1h ago
A bad programmer maybe, but a better / faster developer.
PunchyHamster•1h ago
well, faster, till you have to touch old code that now neither you nor AI understands
jf22•1h ago
AIs are amazing at understanding even utterly horrific bad code.

I've refactored the sloppiest slop with AI in days with zero regressions. If I did it manually it could have taken months.

monkaiju•59m ago
Not in my experience... I am more productive and produce better output than the devs I know that rely on AI tooling.
threethirtytwo•1h ago
Exactly 2 years ago I remember people calling AI stochastic parrots with no actual intellectual capability, and people on HN weren’t remotely worried that AI would take over their jobs.

I mean in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (also quite embarrassing if you read how self assured these people were, this is like 70 percent of HN at the time).

First AI is clearly not a stochastic parrot and second it hasn’t taken our jobs yet but we can all see that potential up ahead.

Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.

I think it’s clear. Everyone’s skills will atrophy. This is the future. I fully expect that in the coming decades the generation after zoomers will never have coded without the assistance of AI, and they will have an even harder time finding jobs in software.

Also: because the change happened so fast you see tons of pockets of people who aren’t caught up yet. People who don’t realize that the above is the overarching reality. You’ll know you’re one of these people if AI hasn’t basically taken over your work place and you and your coworkers aren’t going all in on Claude or Codex. Give it another 2 years and everyone will flip here too.

data-ottawa•1h ago
How is AI not a stochastic parrot? That’s exactly what it is. That never precluded it from being useful.
throwway120385•1h ago
Yeah -- stochastic just implies a probabilistic method. It's just that when you include enough parameters your probabilities start to match the actual space of acceptable results really really well. In other words, we started to throw memory at the problem and the results got better. But it doesn't change the fundamentals of the approach.
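The "probabilistic method" being described can be sketched in a few lines of plain Python: a toy next-token sampler that turns raw scores into a softmax distribution and draws from it. This is an illustration only, not how any production model is implemented, and the logits here are made up:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from a softmax over raw scores (logits)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw an index in proportion to its probability
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary of three "tokens" with made-up scores
random.seed(0)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[sample_next_token([2.0, 1.0, 0.1])] += 1
print(counts)  # the highest-logit token is sampled most often
```

The output is stochastic in exactly the sense above: the highest-scoring token usually wins, but lower-scoring ones still appear with nonzero frequency, and raising `temperature` flattens the distribution.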
RationPhantoms•1h ago
In my experience, it's not that the term itself is incorrect, but more that people use it as a bludgeoning force to end conversations about the technology. What should happen instead is to invite nuance about how it can be utilized and its pitfalls.
mmoll•1h ago
Exactly. After all, how can WE confidently claim that we’re more than stochastic parrots?
pegasus•1h ago
And a parrot (or human) is not stochastic? The truth is we don't actually know. So the usually included "just" is unjustified.
mjr00•1h ago
But AI is still a stochastic parrot with no actual intellectual capability... who actually believes otherwise? I figured most people had played with local models enough by now to understand that it's just math underneath. It's extremely useful, but laughably far from intelligence, as anyone who has attempted to use Claude et al for anything nontrivial knows.
pegasus•1h ago
Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...
xgulfie•1h ago
"They convinced the investors so they must be right"
mjr00•1h ago
> Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...

The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.

bena•43m ago
It's not that the money can predict what is correct, it's that it can tell us where people's values lie.

Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.

Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.

However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.

bigstrat2003•44m ago
Sam Altman is the modern day PT Barnum. He doesn't believe a damn thing except "make more money for Sam Altman", and he's real good at convincing people to go along with his schemes. His actions have zero evidential value for whether or not AI is intelligent, or even whether it's useful.
rybosworld•1h ago
Why does it even matter if it is a stochastic parrot? And who's to say that humans aren't also?

Imagine the Empire State Building had just been completed, and a man was yelling at the construction workers: "PFFT, that's just a bunch of steel and bricks."

threethirtytwo•1h ago
“It’s just math underneath”

This quote is so telling. I’m going to be straight with you and this is my opinion so you’re free to disagree.

From my POV you are out of touch with the ground truth reality of AI and that’s ok because it has all changed so fast. Everything in the universe is math based and in theory even your brain can be fully modelled by mathematics… it’s a pointless quote.

The ground truth reality is that nobody and I mean nobody understands how LLMs work. This isn’t me making shit up, if you know transformers, if you know the industry and if you even listened to the people behind the technology who make these things… they all say we don’t know how AI works.

But we do know some things. We know it’s not a stochastic parrot because, in addition to the failures, we’ve seen plenty of successes on extremely complicated problems that are too non-trivial for anything other than an actual intelligence to solve.

In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot but by then it will just be lip service. Your current reaction is normal given the paradigm shift happened so fast and so recently.

mjr00•59m ago
> The ground truth reality is that nobody and I mean nobody understands how LLMs work.

This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.

rybosworld•52m ago
It's not an insane thing to say.

You can boil LLMs down to "next token predictor". But that's like boiling down the human brain to "synapses firing".

The point that OP is making, I think, is that we don't understand how "next token prediction" leads to more emergent complexity.

mjr00•50m ago
The only thing we don't fully understand is how the ELIZA effect[0] has been known for 60 years yet people keep falling for it.

[0] https://en.wikipedia.org/wiki/ELIZA_effect

rybosworld•41m ago
> The only thing we don't fully understand is

It seems clear you don't want to have a good faith discussion.

It's you claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.

threethirtytwo•47m ago
https://futurism.com/anthropic-ceo-admits-ai-ignorance

There’s tons more where that came from. Like I said lots of people are out of touch because the landscape is changing so fast.

What is baffling to me is that not only are you unaware of what I’m saying, but you also think what I’m saying is batshit insane, despite the fact that people at the center of it all who are creating these things SAY the same thing. Maybe it’s just terminology: understanding how to build an LLM is not the same as understanding why it works or how it works.

Either way I can literally provide tons and tons more of evidence to the contrary if you’re still not getting it: We do not understand how LLMs work.

Also you can prompt an LLM about whether or not we understand LLMs it should tell the same thing I’m saying along with explaining transformers to you.

mjr00•45m ago
That's a CEO of an AI company saying his product is really superintelligent and dangerous and nobody knows how it works and if you don't invest you're going to be left behind. That's a marketing piece, if you weren't aware.

Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.

threethirtytwo•37m ago
Didn’t I say I have tons of evidence?

Here’s another: https://youtube.com/shorts/zKM-msksXq0?si=bVethH1vAneCq28v

Geoffrey Hinton father of AI who quit his job at Google to warn people about AI. What’s his motivation? Altruism.

Man it’s not even about people saying things. If you knew how transformers and LLMs work, you would know that even for the most basic model we do not understand how they work.

mjr00•35m ago
I mean at a minimum I understand how they work, even if you don't. So the claim that "nobody and I mean nobody understands how LLMs work" is verifiably false.
threethirtytwo•3m ago
Did you not look at the evidence I posted? It’s not about you or I it’s about humanity. I have two on the ground people who are central to AI saying humanity doesn’t understand AI.

If you claim to understand LLMs, then my claim is that you are lying.

I build LLMs for a living, btw. So it’s not just other experts saying these things.. I know what I’m saying on a fundamental level.

fingerlocks•46m ago
Try to use an LLM to solve a novel problem, or one within a domain that can’t easily be googled.

It will just spew over-confident sycophantic vomit. There is no attempt to reason. It’s all worthless nonsense.

It’s a fancy regurgitation machine that will go completely off the rails when it steps outside of its training area. That’s it.

threethirtytwo•34m ago
I’ve seen it solve a complex domain-specific problem and build, in 10 minutes, a basis of code that took a human a year. And it did it better.

I’ve also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick and choose the evidence convenient for my world view? What should I do? Actually, why don’t you tell me what you ended up doing?

rybosworld•1h ago
About a year ago, another commenter said this in response to the question "Ask HN: SWEs how do you future-proof your career in light of LLMs?":

> "I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time."

Even a year ago that seemed like a ridiculous thing to say. LLMs have made one thing very clear to me: a massive percentage of developers derive their sense of self-worth from how smart coding makes them feel.

jmathai•59m ago
AI doesn't need or care about "high quality" code in the same ways we define it. It needs to understand the system so that it can evolve it to meet evolving requirements. It's not bound by tech debt in the same way humans are.

That being said, what will be critical is understanding business needs and being able to articulate them in a manner that computers (not humans) can translate into software.

threethirtytwo•57m ago
Yes. If one thing is universal among people, it’s that they can’t fully accept reality at face value when that reality violates their identity.

What has to happen first is that people need to rebuild their identity before they can accept what is happening, and that rebuilding process will take longer than the rate at which AI is outrunning all of us.

What is my role in tech if for the past 20 years I was a code ninja but now AI can do better than me? I can become a delegator or manager to AI, a prompt wizard or some leadership role… but even this is a target for replacement by AI.

bena•1h ago
So what you're saying is that two years ago, people were saying that AI won't take our jobs. And that it hasn't taken our jobs.

Fascinating.

threethirtytwo•52m ago
It will bro.

It also has already taken junior jobs. The market is hard for them.

mjr00•48m ago
> It also has already taken junior jobs.

Correction: it has been a convenient excuse for large tech companies to cut junior jobs after ridiculous hiring sprees during COVID/ZIRP.

threethirtytwo•32m ago
That’s part of it. You’d be lying to yourself if you think AI didn’t take junior jobs as well.
dragonwriter•40m ago
> It also has already taken junior jobs.

Well, it’s taken the blame for job cutting that is really due to the broad growth slowdown since COVID fiscal and monetary stimulus was stopped and replaced with monetary tightening, plus, most recently, the additional hammers of the Trump tariff and immigration policies. Lots of people want to obscure, deny, and distract from the general economic malaise (and because many of the companies involved, and even more of their big investors, are in incestuous investment relationships with AI companies, "blaming" AI for the cuts is also a form of self-serving promotion).

tomku•59m ago
Two years ago there were also hundreds of people constantly panic-posting here about how our jobs would be gone in a month, that learning anything about programming was now a waste of time and the entire profession was already dead, with all other knowledge work guaranteed to follow. People were posting about how they were considering giving up on CS degrees because AI would make them pointless. The people who used language like "stochastic parrots" were regularly mocked by AI enthusiasts, and the AI enthusiasts were then mocked in return for their absurd claims about fast take-off and imminent AGI. It was a cesspool of bad takes coming from basically every angle, strengthening in certainty as they bounced off each other's idiocy.

Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.

bigstrat2003•50m ago
> First AI is clearly not a stochastic parrot

No, it very clearly is. Even today, it obviously has zero understanding of anything; it’s just parroting training data arranged in different ways.

kerkeslager•1h ago
I think, for those of us who have been in this industry for 20 years, AI isn't going to magically make me lose everything I learned.

However, for those in the first few years of their career, I'm definitely seeing the problem where junior devs are reaching for AI on everything, and aren't developing any skills that would allow them to do anything more than the AI can do or catch any of the mistakes that AI makes. I don't see them on a path that leads them from where they are to where I am.

A lot of my generation of developers is moving into management, switching fields, or simply retiring in their 40s. In theory there should be some of us left who can do what AI can't for another 20 years until we reach actual retirement age, but programming isn't a field that retains its older developers well. So this problem is going to catch up with us quickly.

Then again, I don't feel like I ever really lived up to any of the programmers I looked up to from the 80s and 90s, and I can't really point to many modern programmers I look up to in the same way. Moxie and Rob Nystrom, maybe? And the field hasn't collapsed, so maybe the next generation will figure out how to make it work.

fellowniusmonk•1h ago
So this author loves the easy part (writing code), hates the hard part (reading and reviewing), and lacks so much self awareness that he is going to lecture people on skill atrophy?

If you want to be an artist, be an artist, that's fine; just don't confuse artistry with engineering.

I write art code for myself, I engineer code professionally.

The author wraps with a false dichotomy that uses emotionally loaded language at the end: "You Believe We have Entered a New Post-Work Era, and Trust the Corporations to Shepherd Us Into It". I mean, what? Why can't I think it's quickly becoming a new era _and_ not trust corporations? Why does the author take that idea off the table? Is this logic or rhetoric? Who is this author trying to convince?

SWE life has always had smatterings of weird gatekeeping, self-identities wrapped up in external tooling or paradigms, fragile egos, general misanthropy, post-hoc rationalization, etc., but man, watching the progression of the crash-outs these last few years has been wild.

shadowgovt•1h ago
This is a key insight.

In my day job, I use best practices. If I'm changing a SQL database, I write database migrations.

In my hobby coding? I will never write a database migration. You couldn't force me to at gunpoint. I just hate them, aesthetically. I will come up with the most elaborate and fragile solutions to avoid writing them. It's part of the fun.

jonas21•1h ago
You could make the same argument that "Using Open-Source Code Will Make You a Bad Programmer" -- and in fact, a generation ago, many people did.
shadowgovt•1h ago
I've also heard similar arguments about "Using stackoverflow instead of RTFM makes you a bad programmer."

These things are all tradeoffs. A junior engineer who goes to the manual every time is someone I encourage, but if they go only to the manual every time, they will be slower and produce code that is more disjointed and harder to maintain than peers who have taken advantage of other people's insights into the things the manuals don't say.

billy99k•56m ago
It doesn't make you a bad developer; it just stops novel and innovative ways of doing something, because the cheaper way is to just use what's free.
ThrowawayR2•45m ago
> "...many people did..."

I'm trying to think of any examples of someone who said that "a generation ago" at all, let alone anyone who wasn't regarded as a fringe crackpot.

NitpickLawyer•1h ago
I am old enough to have heard this before.

C makes you a bad programmer. Real men code in assembler.

IDEs make you a bad programmer. Real men code in a text editor.

Intellisense / autocomplete makes you a bad programmer. Real men RTFM.

Round and round we go...

HPsquared•1h ago
I'll always remember a lab we had in university where we hand-wrote machine code to do blinkenlights, and used an array of toggle switches to enter each byte into memory by hand.
monkaiju•1h ago
Those are all syntactic changes, AI attempts to be semantic, totally different.
CharlesW•44m ago
The examples are semantic shifts. Assembler → C wasn't just a syntax swap (functions are semantic units, types are meaning, compilation is optimization reasoning, etc.). "Rename this symbol safely across a project" is a semantic transformation. And of course, autocomplete is semantic. AI represents a difference in degree, but not kind. Like the examples cited by the parent, AI further moves us from lower-level semantics to higher-level semantics.
saubeidl•1h ago
All of this is true, but all of the examples that came before were deterministic, so once you understood the abstraction, you still understood the whole thing.

AI is different.

mywittyname•1h ago
My opinion is that these are not analogous.

Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.

It's the same as saying genAI will make you a bad artist. In the sense that putting hands to medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.

However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.

Even before LLMs, successful and capable were orthogonal traits for most programmers. We had people who made millions churning out a CRUD website in a few months, and others who can build game engines but are stuck in underpaid contracting roles.

johnfn•52m ago
Are you not getting practice working with an LLM? Why would that not also be a skill you can practice?
dogma1138•38m ago
Those are objectively different skills tho.
utopman•52m ago
This.

I am mid career now.

High-level languages like JS or Python contain a lot of bad design and suboptimal code... as does a lot of Java code in many places.

Some bad Java code (it only takes a SQL select in a loop) can easily perform a thousand times worse than a clean Python implementation of the same thing.
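
A minimal sketch of the two shapes (using Python's built-in sqlite3 purely as a stand-in; the schema and row counts are invented for illustration):

```python
import sqlite3

# In-memory demo database: orders referencing customers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, f"customer-{i}") for i in range(1000)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 1000, float(i)) for i in range(5000)])

def totals_n_plus_one():
    """The 'select in a loop' shape: one extra query per order row."""
    totals = {}
    for order_id, customer_id, total in conn.execute("SELECT * FROM orders"):
        (name,) = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        totals[name] = totals.get(name, 0.0) + total
    return totals

def totals_single_query():
    """The clean shape: one query, letting the database join and aggregate."""
    return dict(conn.execute("""
        SELECT c.name, SUM(o.total)
        FROM orders o JOIN customers c ON c.id = o.customer_id
        GROUP BY c.name
    """))

# Both produce the same result; only the work done differs.
assert totals_n_plus_one() == totals_single_query()
```

Against a real client/server database, the looped version also pays a network round-trip per row, which is where the "thousand times worse" comes from; in-process SQLite understates the gap.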

As said above, C was once a high-level programming language, and it still is in some places.

I do not code in Python/Go/JS that much these days, but what made me a not-so-bad developer is my understanding of computing mechanisms (why and how to use memory instead of disk, how to arrange code so the CPU can use its cache efficiently...).

As said in many posts, code quality even for vibe-coded stuff depends more on what was prompted, and on how much effort goes into keeping the PR diff human-readable, if you want maintainable and efficient software at the end of the day.

Yet senior devs often spend more time reviewing code than actually writing it. Vibe coding ultimately feels much the same to me at the moment.

I still love to write some code by hand, but I'm starting to feel less and less productive with that approach, while at the same time feeling I haven't really lost my skills to do so.

I think I really feel, and effectively am, more efficient at delivering things at the appropriate quality level for my customers now that I have agentic coding skills under my belt.

wrs•51m ago
AI makes you not be a programmer (at least, that seems to be the goal). So it’s a little different from those.

It’s like a carpenter talking about IKEA saying “I remember when I got an electric sander, it’s the same thing”.

jerf•51m ago
The problem I have with a lot of these "oh, I've heard it all before" posts is that some of what you heard is true. Yes, IDEs did make for some bad programmers. Yes, scripting languages have made for some bad programmers. Yes, other shortcuts have made for bad programmers.

They haven't destroyed everyone, but there are definitely sets of people who used the crutches and never got past them. And not just in a "well, they never needed anything more" sense; they became worse programmers than they should or could have been.

AndyKelley•27m ago
C doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.

IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.

Intellisense/autocomplete doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.

mjr00•20m ago
I get what you're saying but let's be real: 99.99999% of modern software development is done with constant internet connectivity and is effectively impossible without it. Whether that's pulling external packages or just looking up the name of an API in the standard library. Yeah, you could grep docs, or have a shelf full of "The C++ Programming Language Reference" books like we did in the 90s, but c'mon.

I have some friends in the defense industry who have to develop on machines without public internet access. You know what they all do? Have a second machine set up next to them which does have internet access.

nunez•26m ago
lol all of the latter points are true though (except for it being only men, though i get how things were back in the day)
jf22•1h ago
The same way the loom made bad weavers.

Anybody know any weavers making > 100k a year?

ralferoo•48m ago
Yours might have been intended as a facetious comment, but a quick google for designer weaving turns up, as the first hit for me, a UK company that sells its work for approximately $1500 per square foot.

If the demand for this work is high, maybe the individual workers aren't earning $100k per year, but the owner of the company who presumably was/is a weaver might well be earning that much.

What the loom has done is made the repeatable mass production of items cheap and convenient. What used to be a very expensive purchase is now available to more people at a significantly cheaper price, so probably the profits of the companies making them are about the same or higher, just on a higher volume.

It hasn't entirely removed the market for high end quality weaving, although it probably has reduced the number of people buying high-end bespoke items if they can buy a "good enough" item much cheaper.

But having said that, I don't think weavers were on the inflation-adjusted equivalent of 100k before the loom either. They may have been skilled artisans, but that doesn't mean the majority were paid multiples above an average wage.

The current bubble in programming salaries rests on companies being willing to pay for someone they can leverage to produce software that earns the company significantly more than that salary, coupled with the historic demand for good programmers exceeding supply.

I'm sure that even if the bulk of programming jobs disappear because people can produce "good enough" software for their purposes using AI, there will always be a requirement for highly skilled specialists to do what AI can't, or from companies that want a higher confidence that the code is correct/maintainable than AI can provide.

jf22•43m ago
I think this comment supports the point I was trying to make...
mohsen1•1h ago
I'm just enjoying the last few years of this career. Let me have fun!

Joking aside, we have to understand that this is how software is now being created, and this is the tool with which most trivial software (which is most of what we make) will be created.

I feel like the industry is telling me: adopt or become irrelevant.

jf22•54m ago
I already miss the fun heads down days of unraveling complex bugs.

Now I'm just telling AI what to do.

wrs•48m ago
Actually kind of worse: adopt and become irrelevant.
xtracto•4m ago
Meh, I am also old enough to have experienced what the GP post mentioned, and I remember also when Visual Basic 6 was released, a similar sentiment appeared:

Suddenly, every 13-year-old cousin could implement apps for their uncle's dental office, laboratory, parts-shop billing, tourism-office management, etc. Some people also believed that software developers would become irrelevant within a couple of years.

For me, as an old programmer, I am having A BLAST using these tools. I have used enough tools (TurboBasic, Rational Rose (model-based development, ha!), NetBeans, Eclipse, VB6, Borland C++ Builder) to be able to identify their limits and work within them.

alunchbox•1h ago
I get your points here; I've had a similar discussion with my VP of Engineering. His argument is that I'm not hired to write `if` statements, I'm hired to solve problems, and if AI can solve them faster, that's what he cares about at the end of the day.

However, I agree there's a different category here under the idea of 'craft'. I don't have a good way to express this. It's not that I'm writing these 'if' statements in a particular way; it's that the whole system is structured so that I understand every single line, and the code is an expression of my clarity about the system.

I believe there's a split between these two, and both are focused on different problems. Again, I don't want to label, but if I *had to*, I would say one side is business-focused. Here's the thing though: your end customers don't give a fuck whether it's built with AI or crafted by hand.

The other side is the craftsmanship, and I don't know how to express this to make sense.

I'm looking for a good way to express this - feeling? Reality? Practice?

IDK, but I do understand your side of it; however, I don't think many companies will give a shit.

If they can go to market in 2 weeks vs. 2 months, you know what they'll choose.

rbbydotdev•1h ago
While I agree with much of the sentiment, I believe we're approaching a point where the sheer amount of code, and likely its complexity (having been written by AI), will require AI to work with and maintain it.
monkaiju•1h ago
But they're worse at navigating nuance and complexity than humans...
antfarm•1h ago
I have always found it way easier to write code than to understand code written by someone else. I use Claude for research and architectural discussions, but only allow it to present code snippets, not to change any files. I treat those the same way I treat code from Stack Overflow and manually adapt them to the present coding guidelines and my aesthetics. Not a recipe for 10x, but it gets road blocks out of the way quickly.
d--b•1h ago
FWIW, AI writes React code much better than I ever could (or would want to know)
ErroneousBosh•59m ago
Yeah. Come and talk to me when AI can actually write code, though.
focusgroup0•59m ago
Adapt, accept, or be replaced
billy99k•58m ago
This is the future of code.

I know plenty of 50-something developers out of work because they stuck to their old ways and the tech world left them behind.

monkaiju•57m ago
It's honestly a phenomenal time to be a developer who doesn't use AI tooling. It's easier now than ever to differentiate yourself from increasingly knowledge-less devs who can only recite buzzwords but can't actually create, maintain, and improve remotely complex systems.
bloppe•56m ago
It has long been understood that programming is more about reading code than writing code. I don't see any issue with having LLMs write code. The real issue arises when you stop bothering to read all the code that the LLM writes.
lelanthran•22m ago
> The real issue arises when you stop bothering to read all the code that the LLM writes.

Fluency in reading will disappear if you aren't writing enough. And for the pipeline from junior to senior, if the juniors don't write as much as we wrote when young, they are never going to develop the fluency to read.

ramesh31•52m ago
>It's probably fine--unless you care about self-improvement or taking pride in your work.

I did, for a very long time. Then I realized that it's just work, and I'd like to spend my life minimizing the amount of that I do and maximizing the things I do want to do. Code gen has completely changed the equation for workaday folks. Maybe that will make us obsolete, and fall out of practice. But I tend to think the best software engineers are the laziest ones who don't try to be clever. Maybe not the best programmers per se, but I know whose codebase I'd rather inherit.

outside2344•51m ago
Anyone want to wade in and claim CodeSense made us worse developers too?
macinjosh•49m ago
Wow, folks really aren’t getting that your perfectly idiomatic, well-formatted, agonized-over code isn’t needed anymore.

People care if their software works. They don’t care how beautiful the code is.

AI can churn out 25 drafts faster than 99% of devs can get their boilerplate set up for the first time.

The new skill is fitting all that output into deployable code, which if you are experienced in shipping software is not hard to get the model to do.

QuadrupleA•48m ago
As a 25+ year veteran programmer that's been mostly unimpressed with the quality of AI-generated code -

I've still learned from it. Just read each line it generates carefully. Read the API references of unfamiliar functions or language features it uses. You'll learn things.

You'll also see a lot of stupidity, overcomplication, outdated or incorrect API calls, etc.

visarga•44m ago
Coding agents are going to get better and be used everywhere, so why train for the artisanal coding style of 2010 when you are closer to 2030? What you need to know is how to break complex projects into small parts, improve testing, organize work, and handle typical agent problems and capabilities. In the future, no employer is going to have the patience for you to code manually.
xp84•37m ago
Maybe everyone else is using these agentic tools super heavily and it’s way different for them, but I just use AI for all the boring stuff, then I read it and tweak it. It accelerates my process by 2-5x, since I don’t have to implement boring and tedious things like reading or writing a CSV file, and I can spend all my coding time on the actually important parts, the novel parts.

I don’t commit 1,000 lines that I don’t understand.

If people just aren’t coding anymore and are trusting AI to do everything, I agree: they’re going to hit a wall hard once the complexity of their non-architected Frankenstein project hits a certain level. And they’ll be paying for a ton of tokens to spin the AI’s wheels trying to fix it.