When GPT-3 was opened to researchers 4-5 years ago, a friend of mine had access and we tried some things together. I was blown away that it could translate code it hadn't seen between programming languages, even though it was still pretty bad at it at the time. I did not expect coding to be the killer app of LLMs, but here we are.
FYI this is the denial stage.
Reaching this epiphany is a major milestone in the career of an SE even before the days of LLMs. That's basically the crux of it.
I'd guess that only 10% of them actually do. To have those skills, you need good user sense, good business sense, good negotiation skills, and good communication skills. Frankly, those skills align more with a product manager's.
Of course, the best people are still going to be those who have the technical chops and business sense. They'll be amplified more in this era.
I've said before: "There are no 'staff' projects, only 'staff' execution."
aurareturn•1h ago
I don't think most devs will reach the acceptance stage until later this year, when Blackwell-class models come online and AI undeniably writes better code than the vast majority of humans. I'm pretty sure GPT-5.2 and Opus 4.5 were only trained on H200-class chips.
Edit: Based on comments here, it seems like HN is still mostly at the anger stage.
rootnod3•1h ago
Not even starting with how it just “fixes” a bug by introducing a wholly new one, and then re-introduces the old one when you point it out.
aurareturn•1h ago
Or maybe the LLM just hasn't been trained enough on the language you're using.
visarga•1h ago
anonymous908213•1h ago
"Undeniably"? I will deny that they are good. I try to use LLMs on a near-daily basis and find them unbearably frustrating to use. They cannot even reliably complete instructions like "following the pattern of A, B, and C in the existing code, create X, Y, and Z functions but with one change". This is a given; the work I do is outside the training dataset in any meaningful sense, so their next-token prediction is statistically going to lean away from predicting whatever I'm doing, even if RL training to "follow instructions" is marginally effective.
The conclusion I've come to is that the 10x hypebots fall into two categories. The first is hobbyists who could barely code at all, and now they are 10x productive at producing very bad software that is not worth sharing with the world. The other category is people who use LLMs to launder code from the training dataset to wash it free of its licenses. If your use case is reproducing code it has already been trained on, it can do that quickly.
These claims of "holding it wrong", one of which I already see in the replies, are fundamentally preposterous. This is the revolution that is democratising software engineering for anyone who can write natural language, yet competent software engineers are using it wrong? No, the reality is that it simply doesn't have that level of utility. If it did, we would be seeing an influx of excellent software worthy of widespread usage that would replace much of the existing flawed software in the world, if not push new boundaries altogether. Instead we get flooded with ShowHNs fit for the pig trough.
That's not to say LLMs have zero utility. They can obviously generate a proof-of-concept quickly, and if the task is trivial enough, save a couple of minutes writing a throwaway script that you actually use day-to-day. I find them to be somewhat useful for retrieving information from documentation, although some of this gain is offset by the time wasted from hallucinated APIs. But I would estimate the productivity gains at 5%, maybe. That gain is hardly worth the accelerating AI psychosis gripping society and flooding the internet with garbage that drowns out the worthwhile content.
Addendum: Now that your post has been rewritten to assert that no, LLMs aren't there yet, but surely in the next 6 months, this time for sure it'll be AGI... welcome to the bubble. I've been told that AGI is coming in a couple of months every month for the past two years. We are no closer to it than we were two years ago. The improvements have been modest and there are clearly diminishing returns on investing in exponential scaling, not to mention that more scaling can never solve the fundamental architectural flaws of LLMs.
aurareturn•1h ago
anonymous908213•1h ago
aurareturn•1h ago
Maybe a Github repo for me to try?
anonymous908213•1h ago
[1] AI psychosis projects like Gas Town, which are only used by other psychosis victims to create more psychosis projects, and which in the end never result in a real project that solves a real-world problem for real people, do not count.
NitpickLawyer•1h ago
First, there are some "smells" that I noticed. You say that LLMs hallucinate APIs, and in another comment (brief skim of your history to make sure it's worth replying) you say something about chatting with an LLM. If you're "using" them in a chat interface, that's already year-old tech, and you should know that no one here is talking about that. We're talking about LLM-assisted coding using harnesses that make it possible and worth your time. Another smell is that you assert that LLMs only work for languages that are popular. While it's true they work best in those cases, as of about a year ago it's also true that they can work even on invented languages. So I take every "I work in this very niche field" with a grain of salt nowadays.
Second, the overall problem with "it doesn't work for me" is that it's a useless signal. Both in general and in particular. If I see a "positive post", I can immediately test it. If it works, great, I can include it in my toolbox. If it doesn't work, I can skip it. But with posts like yours, I can't do anything. You haven't provided any details, and even if you had, it would still be so dependent on your particular problem (language, environment, etc.) that the signal would be very weak for anyone who doesn't have your particular problem.
I am actually curious, if you can share, what your setup is. And perhaps an example of things you couldn't do. Perhaps we can help.
The third problem that I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, you bring up AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.
Having said that, here's my take: with small provisions made for extremely niche fields (so extreme that they would place you in the 0.0x% of coders, making the overall point moot anyway), I think people reporting zero success are either wrong or using it wrong. It's impossible for me to believe that everything I can achieve is so out of phase with whatever you are trying to achieve that you get literally zero success. And I'm sick and tired of hearing "oh, it works for trivial tasks". No. It works reliably and unattended mostly for trivial tasks, but it can also work in very advanced niches. And there are plenty of public examples of this already: things like kernel optimisation, tensor libraries, CUDA code, and so on. These are not "amateur" topics by any stretch of the word. And no, juniors can't one-shot this either. I say this after 25+ years of doing this: there are plenty of times when I'm dumbstruck by something working on the first try. And I can't believe I'm the only one.
anonymous908213•34m ago
> The third problem that I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, you bring up AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.
This very thread is about hype. The post I originally replied to suggests that developers are in stages of grief about LLMs. That we are traversing denial, anger, and depression, before our inevitable acceptance. It is utterly tiring to be subjected to this day in, day out, in every avenue of public discourse about the field. Of course I have grievances with the hype. Of course I don't appreciate being told I'm in denial and that everything has changed. The only thing that has changed is that LLM-generated articles are all over HN and ShowHN is polluted with a very high quantity of very low quality content.
> Second, the overall problem with "it doesn't work for me" is that it's a useless signal.
The signal is not for the true believers. People who have not succumbed to the hype may find value in knowing that they are not alone. If one person can't make use of LLMs while everyone around them is hyping them up, it may make that person feel like they are doing something wrong and being left behind. But if people push back against the hype, they will know that they are not alone, and that maybe it isn't actually worth investing entire workdays into trying to find the magical configuration of .md files that turns Claude Code from 0.5x productivity to 10x productivity.
To be clear, I'm not really in the market for advice on "holding it right". If I find myself actually being left behind, I will keep giving the tooling another shot until I get it right. I spend most of my life coding, and have so many large projects I wish to bring into the world and not enough time to do them all; I will relentlessly pursue a productivity increase if and when it becomes available. As it is, though, I have seen zero evidence that I am being left behind, and am not interested in trying again at the present time.
palmotea•57m ago
We can only hope! It's about time all those pompous developers embraced the economic rug-pull and adopted a lifestyle more in line with their true economic value. It's capitalism, people, the best system there is. Deal with it and quit whining.