That's not a career-switching issue, that's a company-switching issue. Most people will work for at least one company in their career where the people in charge are dickheads. If you can't work around them, go find a different company to work for. You don't have to throw away an entire career because of one asshole boss.
Also fwiw, resistance is more effective than you think. You'd be surprised how often a dickhead in charge is either A) easy to call the bluff of, or B) needs someone to show them they are wrong. If you feel like you're going to quit anyway, put your foot down and take a stand.
Really? This sounds absurd. "Instead of" means it doesn't matter how shit your work is as long as you're burning tokens? Or it doesn't matter how good your work is if you're not burning tokens? Name and shame
I heard a rumor recently that AWS are doing this, and managers are evaluated based on what percentage of their direct reports used an LLM (an Amazon-approved model) at least once over a given time period.
I guess it's great for AI companies that they've managed to bait-and-switch "this will improve your productivity" into "this is how much time you're sinking into this, never mind whether it was useful"
I'm pretty sure Cursor also has something similar?
[ ] Yes
[ ] Maybe later
If I'm familiar with something (or have been) but haven't done it in a while, 1-2 line autocomplete saves so much time on little syntax and reference lookups. Same if I'm at that stage of learning a language or framework where I get the high-level concepts, principles, use cases and such, but I just haven't learned all the keywords and syntax structures fluently yet. In those situations, speedy 1-2 line AI autocomplete probably doubles the amount of code I output.
Agents are how you get the problems discussed in this thread: code that looks okay on the surface but falls apart on deeper review, whereas 1-2 line autocomplete forces every line or two to be intentional.
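To make that concrete with a made-up example: I can never remember strftime's format directives in Python, so I type up to the open paren and let the completion fill in the rest:

    from datetime import datetime

    now = datetime.now()
    # typed "stamp = now.strftime(" ... the completion supplied the directives
    stamp = now.strftime("%Y-%m-%d %H:%M")  # e.g. "2025-01-31 09:05"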
For those on Visual Studio, this is how to hide it if you're using 17.14 or later:
https://learn.microsoft.com/en-us/visualstudio/ide/copilot-n...
I'm making a comment precisely because it's not obvious when reading the code, and the AI will make up some generic and completely wrong reason.
It is like having an obnoxious co-worker shoving me aside every time I type a new line, completing a whole block of code, and asking me if it's good, without regard to how many times I've rejected those changes.
I still use AI, but I favor a copy-paste flow where I at least need to look at what I'm copying and locate the code I'm pasting into. At least I'm aware of the method and function names and the general code organization.
I also ask for small copy-paste changes so that I keep it digestible. A bonus is that when the context gets too big, ChatGPT in Firefox basically slows down and locks up the browser, which works as an extra sense that the context window is too big and the LLM is about to start spouting nonsense.
That said, AI is an amazing tool for prototyping and for help outside my domain of expertise.
Write a comment first on what you intend to do; the AI then generally does a good job auto-completing below it. You don't have to "sketch everything out": the AI is already using the page as context, and the comment just helps disambiguate what you want, so it can autocomplete significant portions when you give it that nudge.
I've almost fully converted to agentic coding, but when I was using earlier tools, this was an extremely simple method to get completions to speed you up instead of slow you down.
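A minimal made-up sketch of the pattern: the comment is the part you write; the function body is the sort of thing the completion then fills in:

    # group a list of (user, amount) pairs into per-user totals
    def totals_by_user(rows):
        totals = {}
        for user, amount in rows:
            totals[user] = totals.get(user, 0) + amount
        return totals

    print(totals_by_user([("ann", 3), ("bob", 2), ("ann", 1)]))  # {'ann': 4, 'bob': 2}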
Every time Visual Studio updates, it’ll turn back on the thing that shoves a ludicrously wrong, won't-even-compile, not-what-I-was-in-the-middle-of-doing line of garbage code in front of my cursor, ready to autocomplete in and waste my time deleting if I touch the wrong key.
This is the thing that Microsoft thinks is important enough to be worth burning goodwill by re-enabling every few weeks, so I’m left to conclude that this is the state of the art.
Thus far I haven’t been impressed enough to make it five lines of typing before having to stop what I’m doing and google how to turn it off again.
The autocomplete I find useful, especially for menial, very automatic stuff like moving things around when I refactor long methods. Even the comment suggestions look useful. However, the frequency with which it jumps in is annoying. It needs to be dialed down somehow (currently I can only disable it outright). Plus, it eats the allowed autocomplete quota very quickly.
The "agent" chat. It's like tossing a coin. I find very useful when I need to write a tests for a class that don't have. At least, allows me to avoid writing the boiler player. But usually, I need to fix the mocking setup. Another case when it worked fine, it's when helped me to fix a warning that I had on a few VUE2 components. However, in other instances, I saw miserable falling to write useful code or messing very bad with the code. Our source code is in ISO8859-1 (I asked many times to migrate it to UTF8), and for some reason, sometimes Copilot agent messes the encoding and I need to manually fix all the mess.
So... the agent/chat mode, I think, could be useful if you know in which cases it does OK. The autocomplete is very useful but needs to be dialed down.
Actual code/projects? Detrimental
[1] E.g. I spent an evening on this: https://github.com/dmitriid/mop
like ... you expect people to actually be committed to "the value of a hard day's work" for its own sake? when owners aren't committed to the value of a hard day's worker? and you think that your position is the respectable/wise one? lol
Back in the days of SVN, I'd have to deal with people who committed syntax errors, broken unit tests, and other things that either worked but were obviously broken, or just flat out didn't work.
Taking a bit of pride in your work is as much for your coworkers as it is for yourself. Not everything needs to be some silly proles vs bourge screed.
And are you assuming the alternative involves not clocking out? Because "clock out, finish when there's more time" is a very good option in many situations.
There's a very large number of cases where that's the right choice for the business.
No opinion on whether or not this applies to the current moment. But maybe someone should try forcing Dvorak layout on everyone or something like that for a competitive edge!
I would guess that interest, passion, and motivation all play a role here. It's kind of like programming itself. If you sit people down and make them program for a while, some will get good at it and some won't.
And, to use less pointed language, people’s brains are wired differently. What works for one doesn’t necessarily work for another, even with similar interest, passion, and motivation.
I was using emacs for a while, but when I switched to vim, something about the different modes just really meshed with how I thought about what I was doing, and I enjoyed it way more and stuck to it for a couple of decades.
I see people that I'd say are more proficient with their emacs, VS Code, etc setups than I am with my vim setup, so I don't think there's anything special about vim other than "it works for me".
I'd rather learn almost any other of the myriad topics related to software development than the quirks of an opinionated editor. I especially hate memorising shortcuts and commands.
And that time when I changed vim to a symlink to emacs on a shared login server and sat back and enjoyed the carnage. (I did change it back relatively quickly)
GPT-5: Typesetting and paste-up, film prepress/stripping, CMYK color separations, halftone screening, darkroom compositing/masking, airbrush photo retouching, optical film compositing/titling, photochemical color timing, architectural hand drafting, cartographic scribing and map lettering, music engraving, comic book lettering, fashion pattern grading and marker making, embroidery digitizing and stitching, screen-print color separations
Are you aware that there are people that think that even now AI can do everything you describe?
The reason crappy software has existed since...ever is because people are notoriously bad at thinking, planning and architecting systems.
When someone makes a "smart decision", it often translates into a nightmare for someone else 5 or 10 years down the line. Most people shouldn't be making "smart decisions"; they should be making boring decisions, as most software is actually glorified CRUD. There are exceptions, obviously, but don't think you're special: your code also sucks and your design is crap :) The goal is often to be less sucky and less crappy than one would expect; in the end, it's all ones and zeros, and the fancy abstractions exist to dumb down the ones and zeros to concepts humans can grasp.
A machine can and will, obviously, produce better results and better reasoning than an average solution designer; it can consider a multitude of options a single person seldom can; it can point out from the get-go shortcomings and domain-specific pitfalls a human wouldn't even think of in most cases.
So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.
For people who are so confident (which, I'm not), it's an obvious step; developers who don't want to use it must either be luddites or afraid it'll take their jobs. Moving sales people to digital CRMs from paper files, moving accountants to accounting software from paper ledgers and journals, moving weavers to power looms, etc etc -- there would have been enthusiasts and holdouts at every step.
The PE-bro who's currently boasting to his friends that all code at a portfolio company has to be written first with Claude Code, and that developers are just there to catch the very rare error, would have been boasting to his friends about replacing his whole development team with a team that cost 1/10 the price in Noida.
Coding agents can't replace developers _right now_, and it's unclear whether scaling the current approach will ever get them there. But at some point (maybe not until we get true AGI) they will be able to replace a substantial chunk of the developer workforce, and a significant share of developers will be highly resistant to it. The people you're complaining about are simply too early.
And if they really tied their livelihood to working at the same company for the next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.
When you dig down into it, there's usually some insane luxury that they're completely unwilling to give up on.
If you're a software engineer in the United States, or in London, you can almost certainly FIRE.
Absolutely not enough to retire early, but easily enough to not live paycheck to paycheck. Making 6 figures in the USA and not being able to afford life is such a mystery to me.
I'd say that there's some room for nuance there. Tech hiring has slowed significantly, such that even people in senior roles who get laid off may be looking for a long time.
If you work for Microsoft you're not getting top-tier comp to begin with (at least compared with many other tech companies), and then on top of that you're required to work out of a V/HCOL city. Add in the expenses of a family, which have risen dramatically over the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck who weren't having that issue a couple of years ago.
Check the prices in Seattle, SF, LA, DC, and NYC metro areas for 2-4 bedroom rentals and how they've jumped the last few years. You're looking at 35%-45% of their take home pay just on rent even before utilities. I'm not sure the math works out all that well for people trying to support a family, even with both parents working.
If you maxed out your lifestyle relative to your income then yes, that is the case. It will always be, no matter how much you make.
It's also the case for the guy stocking the shelves at your local Walmart if he maxes out his lifestyle. But if you compare both in absolute terms, there are huge differences.
Which lifestyle you have is your choice. How big of a house, what car, where to eat, hobbies, clothes, how many kids, etc. If you max that out, fine, enjoy it. But own that it was your choice and comes with consequences, i.e., if expenses rise more than income, then suddenly your personal economy is stretched. And that's on you.
Usually over here we don't dream of making it big with big villas and a Ferrari in the garage; we work to live, not live to work.
Google Gwangju.
Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy
That is all you heard in the '80s and '90s: people over the pond showing off how many hours per week they worked. Like... how is that something to be proud of? So wow, you spent 12+ hours per day working, had no free evenings, zero paid holidays. And that is supposed to impress who?
please.
Personally I want my MSFT position to increase, so I’m cool with whatever the company does to increase the share price.
Or perhaps that's the problem, lacking it.
People hate learning new tools, even if they are more efficient. People would rather avoid doing things than learn a tool to do them efficiently.
Even in this thread you can see someone who is / was a Vim holdout. But the improvement from Vim to an IDE will be a fraction of the difference compared to AI-integrated IDEs.
Saying that the people are the problem instead of the tool is a lazy argument IMO. "It's not the company's fault, it's the customer's."
I know there are people still using PHP 5 and deploying via FTP, but most people moved on to be better professionals and use better tools. Many people are doing this to AI, too, me included.
The problem is that some big companies and influential people treat AI as a silver bullet and convince investors and customers to think the same way. These people aren't thinking about how much AI can help people be productive. They are just thinking about how much revenue it can give until the bubble pops.
Today you have "frontend programmers" who couldn't implement a simple algorithm even if their life depended on it; that's not necessarily bad, since it democratizes access to tech and lowers the entry bar. These devs up in arms against AI tools are just gatekeepers: they see how easy it is to produce slop and feel threatened by it. AI is a tool; in most cases it will improve the speed and quality of your work; in some cases, it won't. Just like everything else.
Actually, yes. People forced React (instead of homegrown or different options) because it's easier to hire for than finding JS/TypeScript gurus to build your own stuff.
People forced cloud infrastructure; even today, if your 10-customer startup isn't using cloud in some capacity and/or Kubernetes, investors will frown on you and devops will look at you weird (what? needing to understand the inner workings of software products to properly configure them?)
Microservices? Check. Five years ago you wouldn't even be hired if you skipped microservices; everyone thinks they're Google, and many startups need to burn those AWS credits. That's how you get a dozen-machine cluster to run a solution a proper dev could code in a week and run on a laptop.
I've worked with people using vim who wildly outproduce full teams using IDEs, and I have a strong suspicion that forcing the vim person to use an IDE would lower their productivity, and vice versa
This is not due to the editor. Vim is not a 20x productivity enhancer.
>forcing the vim person to use an IDE would lower their productivity
Temporarily, sure. But their productivity should actually go up once they're used to it. This idea of wanting to avoid such a setback, and avoiding change, is what keeps people on such an outdated workflow.
Useful tools, but I think the idea that they'll replace programmers is (wishful? eek) thinking.
Mass-produced clothing exists in many industrialized countries, typically the premium stuff; the sweatshop stuff is quite a bit cheaper, and customers are happy paying less. It's not capitalism, it's consumer greed. But nice story.
It’s a bit like returning to the office. If it’s such an obvious no-brainer performance booster with improved communication and collaboration, they wouldn’t have to force people to do it. Teams would chomp at the bit to do it to boost their own performance.
And even if it was, that's also assuming this benefit would be superior to the benefit of remote work for the individual.
Similarly, many people don't like learning new tools and don't like changing their behavior, especially if it's something they enjoy versus something good for the business. It's 2025 and people will have adamantly used vim for 25 years; some people aren't likely to change what they're comfortable with. Regardless of what is good for productivity (which vim may or may not be), developers are picky about their tools, and it's hard to convince people to try new things.
I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment, and if "the business" must wait for them to explore and discover it on their own time, they risk forgoing profits associated with that employee's work.
Your argument also hinges on "business" knowing what is good for productivity, which they generally don't. Admittedly, neither do many programmers, else we'd have a lot less k8s.
With LLMs, I'm not so sure. Seems more like an individual activity to me. Are some people resistant to new tools, sure. But a good tool does tend to diffuse naturally. I think LLMs are diffusing naturally too, but maybe not as fast as the AI-boosters would like.
The mistake these managers are making is assuming it's a good tool for work that they're not qualified to assess.
If the AI tools actually worked how they are marketed I’d use them because that’s less work for me to have to do. But they don’t.
Even though I hear from them that "it helps them in languages they do not know" (which is also my experience), I get frowned upon if in meetings I don't say that I am "actively using AI to GENERATE whole files of code".
I use AI as a rubber duck, to generate repetitive code, or to support me when getting into a new language or technology. But as soon as I understand it, most of the code it gives for complete, non-hobby, enterprise-level projects contains either inefficient code or just plain mistakes, which take me ages to fix in new technologies.
Metrics we understand, but managers sometimes fail to understand them. You are a means of production. With the advent of AI, some very hyped people think, and wish, they could get rid of programmers.
You know what I am doing in the meantime? I built a business. I am just finishing the beta deployment test now. Can it go wrong? Yes.
But otherwise, you face being a number, a production-chain cog, in the future. Besides that, when they can get rid of you, you are going to be in a bad position to move at that time. Invest time now in an alternative strategy, if you can.
Of course, I know nothing about you so I might be totally wrong. If you already have financial safety for the rest of your life, this does not apply as hard.
I am trying to buy more freedom on my side. I already had some, but not enough. You will not be free with a manager to report to, even if you think you are doing a better job than he thinks. Or even if you are objectively doing it.
They will care about delivery in a rush, politics, self-interest (this is not different from any human, but you will depend on them), etc.
Just choose freedom :D
Why are programs, the result of the ingenuity of people working in the software field, not protected against AI slop?
Why is there no narrative out there describing how fake and soulless code written by an AI agent is?
Programmers are by and large not assholes averse to sharing, which is why we have copyleft and Stack Overflow.
Coding is also a process, a process that you may need to go through many times. The creation and maintenance of expert systems
Artists tend to want to win it big once, never innovate, and use the government to force people to send them money.
Programs, on the other hand, still need developers to make them. Also, we've seen decades of tooling evolution that (1) made developers more productive and (2) failed to replace developers.
But the "rubber-stamp" framing is wrong, if it were true then you would not be needed at all. It's actually harder to use gen AI than to code manually. Gen AI has a rapid pace and overwhelming quantity of code you need to ensure is not broken in non-obvious ways. You need to layer constraints, tests, feedback systems for self repair and handle memories across contexts.
I recently vibe-coded 100K LOC across dozens of apps; I feel the rush of power in coding agents, but also the danger. At any moment they could hallucinate, misunderstand, or work from a different premise than you did. Going past 1,000 LOC requires sustained focus; it will quickly unravel into a mess otherwise.
Feels like you are assuming everyone has your diligence and the diligence that exists in the industry isn't already rapidly decaying due to what's happening.
The better the code generated by LLMs gets, the less incentive there is to say "no". Granted, we're not nearly there yet (even though media reports and zealous tech bros say otherwise). But, and this is especially true for organizations that already had a big code-quality problem before the LLMs showed up, if the interpreter / compiler accepts the code and it superficially looks like it does what it should, there is pressure to simply accept it.
Why say no when we could be done now and move on!? Rubber-stamp it and let's go! Sigh. Maybe I'm overly pessimistic, reading the raves about LLMs every day grinds me down.
So yes, it does increase "velocity" for person A, who can get away with using it. But the decrease in velocity for person B, who has to build on top of that code, is never properly tracked. It's like a game of hot potato: if you want to game the metrics, you'd better be the one working on greenfield code (although I suppose maintenance work has never been looked on favorably in performance reviews; but now the cycle of code rot is accelerated).
The same is true for many people submitting PRs to OSS. They don't care about making real contributions, they just want to put something on their resume.
AI is probably making it more common, but it really isn't a new issue, and is not directly related to LLMs.
Turns out sometimes the next guy who has to do maintenance is oneself.
Over the years I've been well-served by putting lots of comments into tickets like "here's the SQL query I used to check for X" or "an easy local repro of this bug is to disable Y", etc.
It may not always be useful to others... but Future Me tends to be glad of it when a similar issue pops up months later.
After it becomes second nature, it's really relaxing to know I have left all the context I could muster around: comments in tickets, comments in the code referencing a decision, well-written commit messages for anything a little non-trivial. I learnt that peppering all the "whys" around is just being a good citizen in the codebase, even if only for Future Me.
It doesn’t really get its own anything, as it is unable to "get". It's just a probabilistic machine spitting out the next token.
Shiny new stuff quickly produced, manager smiles and pays, contractor disappears, heaven help the poor staffers who have to maintain it.
It's not new, just in a new form.
(I also find the people who simply paste LLM output to you in chat are the much bigger evil)
They repeatedly got an NC-17 from the MPAA and kept resubmitting it (six times) until, just before release, the MPAA relented, gave it an R, and it was released as-is.
https://en.wikipedia.org/wiki/South_Park:_Bigger,_Longer_%26...
Of course, the golden rules are: 1. write the tests yourself, don't let the LLM write them for you, and 2. don't paste that test code directly into the LLM prompt when letting it generate code for you.
In the end it boils down to specification: the prompt captures the loosely-defined specification of what you want, LLM spouts something already very similar to what you want, tweak it, test it, off you go.
With test driven development this process can be made simpler, and other changes in other parts of the code are also checked.
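As a made-up illustration of rule 1, here's the kind of test file one might write by hand before asking the LLM for the implementation (slugify is a hypothetical module the LLM is asked to produce, without ever seeing these tests):

    # test_slugify.py: written by hand, never pasted into the prompt (rules 1 and 2)
    from slugify import slugify  # hypothetical module the LLM will implement

    def test_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace():
        assert slugify("a   b") == "a-b"

    def test_strips_separators():
        assert slugify("--trim me--") == "trim-me"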
That's where we're at right now anyways.
"If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping--"
And that's how we got here.
The code rot issue will blow up a lot more over the next few years, so that we can finally complete the sentence and start "shipping competing code that works".
I worry that mopping up this catastrophe is going to be a task that people will again blindly set AI upon without the deep knowledge of what exactly to do, rather than "to do in general, over there, behind that hill".
A lot of non-technical people are going to get surprisingly far into their product without realising they are on a bad path.
It already happens now when a non-technical founder doesn't get a good technical hire.
The surprising thing for developers though, is how often a shit codebase makes millions of dollars before becoming an issue. As much as I love producing rock solid software, I too would take millions of dollars and a shit codebase over a salary and good code.
Unfortunately a lot of people are in that situation. You can basically forget about disruption. Meritocracy is dead, long live the Peter principle.
I'm also reminded of that legendary old IBM quote from 1979: "A computer can never be held accountable. Therefore a computer must never make a management decision."
In a larger scope, I tend to break many "rules" when I code, because my experience argues against them, and this is what makes me unique. Of course, nowadays I need to convince my team to approve it, but sometimes things that are written differently are free from certain flaws that, in this very case, I want to avoid.
-- EDIT --
I think that this management trend comes from bad management principles. There's a joke that a bad manager is a person who, knowing that one woman delivers a baby in nine months, concludes that nine women can deliver a baby in one month. A similar principle applies here: they were sold by the commercials on how AI makes things faster, they put the numbers into their spreadsheet, and now they expect the output they pay for to match the numbers on the sheet. And if the numbers don't fit, they start pushing.
It sounds anti-LLM, but it actually helps support the illusion that LLMs can do more than they actually can.
I don't think an LLM can write serious software on its own. If it could, there would be some extraordinary evidence, but all there is are some people spreading rumours. If you ask them for simple evidence of comparable performance (like a video), they shy away or answer vaguely.
The thing is not there yet, and I understand the optimism of some, but I must also emphasize that it's not looking great for LLM coding enthusiasts right now. There's no amount of proselytism that can make up for the lack of substance in their claims. Maybe they can trick investors and some kids, but that's not going to cut it in the long run.
Therefore, this is not a problem. I don't need to worry about it. If (or when) some evidence appears, I can then worry about it. This hasn't happened yet.
Current models are the embryos of what is to come.
The code quality of the current models is not replacing skilled software, network, or ops engineers.
Tomorrow's models may well do that, though.
Venting the frustrations of this is all very well, but I sincerely hope those who wish to stay in the industry learn to get ahead of AI and utilize and control it.
Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.
We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, for example, is not that far away.
All while today's tech talent spends energy bickering on HN about the loss of being the code review King.
Everyone hated the code review royalty anyway. No one mourns them. Move on.
GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse in other metrics).
Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down.
Not enough new data, new data that is LLM generated (causing a "recompressed JPEG" sort of problem), absurd compute requirements for training that are only getting more expensive. At some point you hit hard physical limits like electricity usage.
[1]: If this happens, one side effect is that local models will be more than good enough. Which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.
The marketing around AI as a feature complete tool ready for production is disingenuous at best, and outright fraud in many cases.
In scenarios where the latter especially might not be true, it seems like an inevitable failure. And I am not even sure any fixes will be thought through either... which makes me rather sceptical of the whole thing.
It's exhausting, infuriating, and a waste of time.
My experience is that using AI as a fancy code completion tool works very well and saves me a lot of time.
But trying to let it define how to do things, aka vibe coding, is a recipe for endless disaster.
An AI coder can do great things, but it needs someone to first define the architecture and forcefully guide it in the right direction at every step. If let loose, things go haywire.
I generally find the whole process to be more frustrating and time consuming than just writing the code myself.
I am not interested in entire new architectural paradigms required to enable a mediocre code ad-lib bot.
I thought we were all 'full stack engineers' now, otherwise the resume got thrown into the circular file?
Great. I wait with anticipation for the slide back to 'Calculator'.
Today, no commercial pilot would get the idea that they are there to fly straight for eight hours. They are there for when bad things happen.
I expect software development to go into a similar direction.
No, for real: LLM solutions cost a shitload of money, and every investment needs to be justified at the management level. That's the reason they are enforcing it.
My bigger problem is that there are a whole lot of "developers" who do not read the generated code properly, which is why you end up in review sessions where the developer does not know what is happening or why the code acts in a particular way. And we have not yet discussed clean code principles throughout the whole solution...
Or clones a template repo and only tweaks a few files
Or imports libraries with code I've never read
Programmers wrote the StackOverflow answer and wrote that library.
But according to your definition, I'm a script kiddy.
- Checks Stack Overflow only for very niche issues; never finds exactly what he needs, but reaches a solution by reading multiple answers, and sometimes posts a better solution for his issue
- Has his own templates for repetitive and boring stuff (common); implements the complex logic first, if there is any, and gets the rest done as fast as possible, mildly disgusted
- Imports libraries and often takes a look at the code, noticing things that could be improved. Has private forks of some popular open-source libraries that fix issues or improve performance by fixing silly errors upstream. Sometimes he is allowed, or has time, to send the fixes back upstream. When using those libraries he sometimes finds bugs, and the first thing he does is check the code and try to fix them directly: no tickets to the maintainers, often he just opens a PR with the fix.