Pick anything else and you have a far better chance of falling back on a manual process, a legal wall, or something else that AI cannot easily replace.
Good job boys and girls. You will be remembered.
Prompting an AI just doesn't have the same feeling, unfortunately.
It brings the “what to build” question front and center, while “how to build it” has become much, much easier and more productive
Same thing for science. I don't mind if AI solves all those problems, as long as it can teach me. Those problems are already "solved" by the universe anyway.
There's so much half-working AI-generated code everywhere that I'd feel ashamed if I had to ever meet our customers.
I think the thing that gives me the most value is code review. So basically I first review my code myself, then have Claude review it, and then submit it for someone else to approve.
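Concretely, that middle step can be as simple as piping the branch diff into Claude from a script. A rough sketch, assuming the Claude Code CLI's non-interactive `claude -p` mode; the prompt wording and the `main` base branch are just illustrative:

```python
# review_gate.py -- have Claude review the branch diff before a human does.
# Assumes the Claude Code CLI is installed; `claude -p <prompt>` runs a single
# non-interactive turn, with the diff supplied on stdin.
import subprocess

def claude_review(base: str = "main") -> str:
    """Return Claude's review of the diff between `base` and HEAD."""
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    if not diff.strip():
        return "No changes to review."
    review = subprocess.run(
        ["claude", "-p",
         "Review this diff for bugs, edge cases, and unclear naming:"],
        input=diff, capture_output=True, text=True, check=True,
    )
    return review.stdout

if __name__ == "__main__":
    print(claude_review())
```

The point is the ordering: the script only runs after my own pass, so Claude catches a second tier of issues before a human reviewer spends any time on it.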
Maybe it's just because my side projects are fairly elementary.
And I agree that AI is pretty good at code review, especially if the code contains complex business logic.
The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.
The whole process was really fun! It's not really vibe coding at that point (I continue to be relatively unimpressed by vibe coding beyond a few hundred lines of code). It's closer to old-school waterfall-style development, though with much quicker iteration cycles.
The common thread among people working on AI is that they ALL know software. They build products that solve the problems they know best.
If all lawyers knew how to write code, we’d see more legal AI startups. But the overlap between lawyers and coders is small, nowhere near as large as the overlap between SWEs and coders.
Good job AI fanboys and girls. You will be remembered when this fake hype is over.
I don't really see how anywhere near the number of great jobs this industry has had will be justifiable in a year. The only comfort is that all the other industries will be facing the same issue, so accommodations will have to be made.
Damn it, I’m only 40-something, so I still need to work more or less 15 more years even if we live frugally.
What that means for a society where extremely rich people own the resources and capital, and everyone else is valued only for their dexterity and physical labor (rather than skills), I can only guess.
I do think the AI labs have potentially unleashed a society-changing technology that ironically penalizes meritocracy and/or intelligence by making it less scarce. The jobs left will be the ones people avoided for a reason (health, risk, etc.).
> do not know what's coming for us in the next 2-3 years, hell, even next year might be the final turning point already.
What is this based on? Research? Data? Gut feeling?
> but how long will it be until even that is not needed anymore?
You just answered that. 2 to 3 years, hell, even next year, maybe.
> it also saddens me knowing where all of this is heading.
If you know where this is heading why are you not investing everything you have in these companies? Isn't that the obvious conclusion instead of wringing your hands over the loss of a coding job?
It invents a problem, provides a timeline, immediately questions itself, and then confidently prognosticates without any effort to explain the information used to arrive at this conclusion.
What am I supposed to take from this? Other than that people are generally irrational when contemplating the future?
Because unlike previously:
- You can't invest in these things directly (they're mostly private), so gains are at best diluted for retail investors.
- They can take your job AND still be unprofitable (i.e. running on VC money/subsidies).
- Value potentially accrues to the capital/companies using it, not to the AI labs themselves in a competitive market. In that case the gains will be spread across many industries and diluted (i.e. not life-changing even if you invest enough to offset the income loss).
Combined with the fact that many people rely on their income to pay the bills and don't have enough capital to invest in these things, yes:
- They are exposed to the loss of income from their labor.
- They don't have the capital and/or risk tolerance to invest accordingly.
- The way to invest in these isn't obvious and is subject to unsystematic risk (i.e. can you pick the winners?).

I'm honestly not complaining about the model releases, though. Despite their shortcomings, they are extremely useful. I've found Gemini 3 to be an extremely useful learning aid, as long as I don't blindly trust its output; if you're trying to learn, you really ought not do that anyway. (Despite what people and benchmarks say, I've already caught some random hallucinations; it still feels like you're likely to run into them on a regular basis. Not a huge problem, but, you know.)
https://www.reddit.com/r/ClaudeAI/comments/1pe6q11/deep_down...
https://www.reddit.com/r/ClaudeAI/comments/1pb57bm/im_honest...
https://www.reddit.com/r/ChatGPT/comments/1pm7zm4/ai_cant_ev...
https://www.reddit.com/r/ArtificialInteligence/comments/1plj...
https://www.reddit.com/r/ArtificialInteligence/comments/1pft...
https://www.reddit.com/r/AI_Agents/comments/1pb6pjz/im_hones...
https://www.reddit.com/r/ExperiencedDevs/comments/1phktji/ai...
https://www.reddit.com/r/csMajors/comments/1pk2f7b/ (cached title: Your CS degree is worthless. Switch over. Now.)
"The overwhelming consensus in this thread is that OP's fear is justified and Opus represents a terrifying leap in capability. The discussion isn't about if disruption is coming, but how severe it will be and who will survive."
My fellow Romans, I come here not to discuss disruption, but to survive!
Right: if you expect your job as a software developer to be effectively the same shape in a year or two, you're in for a bad time.
But humans can adapt! Your goal should be to evolve with the tools that are available. In a couple of years' time you should be able to produce significantly more and better code, solving more ambitious problems and making you more valuable as a software professional.
That's how careers have always progressed: I'm a better, faster developer today than I was two years ago.
I'll worry for my career when I meet a company that has a software roadmap that they can feasibly complete.
In threads where I see an example of what the author is impressed by, I'm usually not impressed. So when I see something like this, where the author doesn't give any examples, I also assume Claude did something unimpressive.
If I was only writing code, the fear would be completely justified.
1) it’s not impartial
2) it’s useless hype commentary
3) it’s literally astroturfing at this point
Otherwise, with all due respect, there's very little of value to learn in that subreddit.
Something doesn't square about this picture: either this is the best thing since sliced bread and it should be wildly profitable, or ... it's not, and it's losing a lot of money because they know there isn't a market at a breakeven price.
They have several billion dollars of annual revenue already.
If OpenAI is only going to be profitable (aka has an actual business model) if other companies aren't training a competitive model, then they are toast. Which is my point. They are toast.
In principle, I mean. Obviously there's a sense in which it doesn't matter if they only get fined for cross-subsidising/predatory pricing/whatever *after* OpenAI et al run out of money.
I do think this is a bubble and I do expect most or all the players to fail, but that's because I think they're in an all-pay auction and may be incentivised to keep spending way past the break-even point just for a chance to cut their losses.
But as a gut-check, even if all the people not complaining about it are getting use out of any given model, does this justify the ongoing cost of training new models?
If you could delete the ongoing training costs of new models from all the model providers, all of them look a lot healthier.
I guess I have a question about your earlier comment:
> Google is always going to be training a new model and are doing so while profitable.
While Google is profitable, or while the training of new models is profitable?
That’s a reason why I can’t believe the benchmarks, and why I also believe open-source models (claiming 200k but realistically struggling past 40k) aren’t only a bit behind SOTA in actual software dev, but very far behind.
This is not true for all software, but there are types of systems or environments where it’s abundantly clear that Opus (or anything with a sub-1M context window) won’t cut it, unless it has a very efficient agentic system to help.
I’m not talking about dumping an entire code base into the context; I’m talking about clear specs, some code, library guidelines, and a few elements that allow the LLM to be better than a glorified autocomplete living in an Electron fork.
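As a rough sketch of what I mean by that kind of curated context (all file paths and the task string here are hypothetical placeholders, not a real project layout):

```python
# build_context.py -- assemble a focused prompt instead of dumping the repo.
# The paths and task below are hypothetical; swap in whatever your project has.
from pathlib import Path

def build_prompt(task: str) -> str:
    """Concatenate the spec, library guidelines, and only the relevant code."""
    sections = {
        "Spec": Path("docs/spec.md"),
        "Library guidelines": Path("docs/library-guidelines.md"),
        "Relevant code": Path("src/billing/invoice.py"),
    }
    parts = [f"Task: {task}"]
    for title, path in sections.items():
        parts.append(f"## {title}\n\n{path.read_text()}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Add proration to invoice generation"))
```

A few thousand well-chosen tokens like this fit comfortably even in a 40k effective window, which is the whole argument.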
Sonnet still wins easily.
qwen3-coder blew me away.
> Taking longer than usual. Trying again shortly (attempt 1 of 10)
> ...
> Taking longer than usual. Trying again shortly (attempt 10 of 10)
> Due to unexpected capacity constraints, Claude is unable to respond to your message. Please try again soon.
I guess I'll have to wait until later to feel the fear...
It's definitely more useful than I was in my first 5 years of my professional career, though, so for people who don't improve fast, or for average new grads, this can be a problem.
pogue•1mo ago
Crypto was just that, a pure grift where they were creating something out of nothing and rugpulling when the hype was highest.
AI is actually creating something: it's generating replacements for artists, for creatives, for musicians, for writers, for programmers. It's literally capable of generating something from _almost_ nothing. Of course, you have to factor in energy usage, etc., but the end user sees none of that. They type a request and it generates an output.
It may be easily identifiable slop today, but it's getting better and better at a RAPID rate. We all need to recognize this.
I don't know what to do with the knowledge that it's coming for our jobs. Adapt or die? I don't know...
krackers•1mo ago
The common thread is that there's no nuanced discussion to be found, technical or otherwise. It's topics optimized for viral engagement.
pogue•1mo ago
I see what you're saying, but that's a different aspect entirely. I don't know how much people are making from viral posts on Twitter (or fb?) with that kind of thing.
But outside of those specific platforms, there's quite a bit of discussion of it on reddit, and the discussion on here has been some of the best. The good tech sites like Ars, Verge, Wired, and the Register all have excellent, realistic coverage of what's going on.
I think if you're only seeing hype, I'd ask where you're looking. And on the flip side, there's the very anti-AI crowd, who I'm sure might be getting the same kind of reach with their target audience, preaching the evils & immorality of it.