Things have improved significantly since then. Compare copying and pasting code from o1/o3 with letting Codex 5.3 xhigh assemble its own context and do the work for you.
It's also not a study of just engineers; it covers people across engineering, product, design, research, and operations. For a lot of non-code tasks, AI needs pasted context, since the material isn't usually in a repo the way code is.
(And their comments about intensifying engineering workload aren't really changed by AI copy/paste vs. context assembly, either.)
I humbly propose that point is today.
You're right that the argument will become boring, but I think it's gonna be a minute before it does. I spent much of yesterday playing with the new "agent teams" experimental feature of Claude Code, and it's pretty remarkable. It one-shotted a rather complex Ansible module (including packaging for release to Galaxy), and it built a game that teaches stock options, also basically in one shot.
On Thursday I had a FAC with a coworker, and he predicted 2026 is going to be the year of acceleration. Based on what I've seen over the last 2-3 years, I'd say it's hard to argue with that.
In the past, AI coding agents could usually reason about the code well enough to have a good chance of success, but I'd have to test manually, since they were bad at "seeing" the output and characterizing it in a way that let them debug when things went wrong. And they would never, ever check visual outputs unless I forced them to (probably because it didn't work well during RL training).
Opus 4.6 correctly reasoned (on its own; I didn't even think to prompt this) that it could "test" the output by grabbing the first, middle, and last frames, and observing that the first frame should be empty, the middle frame should be half full of detail, and the final frame should resemble the input image. That alone wouldn't have impressed me that much, but it actually found and fixed a bug based on visual observation of a blurry final frame (we hadn't run the NeRF training for enough iterations).
In a sense this is an incremental improvement in the model’s capabilities. But in terms of what I can now use this model for, it’s huge. Previous models struggled at tokenizing/interpreting images beyond describing the contents in semantic terms, so they couldn’t iterate based on visual feedback when the contents were abstract or broken in an unusual way. The fact that they can do this now means I can set them on tasks like this unaided and have a reasonable probability that they’ll be able to troubleshoot their own issues.
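Roughly the kind of check it set up, as a minimal sketch (assuming OpenCV; the video filename is hypothetical): pull the first, middle, and last frames out of the rendered video and dump them as images for visual inspection.

    import cv2

    def sample_frames(video_path: str, out_prefix: str = "frame") -> None:
        # Grab first/middle/last frames so the model (or a human) can eyeball
        # the training progression: empty -> half-rendered -> like the input.
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        if total == 0:
            raise ValueError(f"could not read frames from {video_path}")
        for label, idx in [("first", 0), ("middle", total // 2), ("last", total - 1)]:
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the target frame
            ok, frame = cap.read()
            if ok:
                cv2.imwrite(f"{out_prefix}_{label}.png", frame)
        cap.release()

    sample_frames("nerf_training_progress.mp4")  # hypothetical filename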
I understand your exhaustion at all the breathless enthusiasm, but every new model radically changes the game for another subset of users/tasks. You're going to keep hearing that counterargument for a long time, and the worst part is, it's going to be true even if it's annoying.
Not just won’t get you there fastest. At all.
This is most likely correct. Everyone talks about how AI makes it possible to "do multiple tasks at the same time", but no one seems to care that the cognitive (over)load is very real.
It reminds me a bit of how a while back people were finding that operating a Level 3 autonomous vehicle is actually more fatiguing than driving a vehicle that doesn't even have cruise control.
Why not just have another worktree?
On the bright side, that would address the employability crisis for new grads.
Then "Make a detailed list of changes and reasoning behind it."
Then feed that to another AI and ask: "Does it make sense and why?"
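A minimal sketch of that two-step cross-check (the ask function and model names are placeholders for whatever chat API you actually use):

    from typing import Callable

    # ask(model_name, prompt) -> reply; plug in your own API client here.
    def cross_check(diff: str, ask: Callable[[str, str], str]) -> str:
        # Step 1: have one model summarize and justify the changes.
        summary = ask(
            "writer-model",  # hypothetical model name
            "Make a detailed list of changes and the reasoning behind them:\n" + diff,
        )
        # Step 2: have a second model sanity-check the summary.
        return ask(
            "reviewer-model",  # hypothetical model name
            "Does this make sense, and why?\n" + summary,
        )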
A responsible developer will only produce code as fast as they can sign it off.
An irresponsible one will just shit all over the codebase.
> you do end up taking on more than you can handle, just because your mob of agents can do the minutia of the tasks, doesn’t free you from comprehending, evaluating and managing the work
I’m currently in an EM role and this is my life but with programmers instead of AI agents.
What's stopping you from becoming an IC and producing as much as your full team then? What's the point of having reports in this case?
I am not sure about this statement. Aren't we always cutting corners to make things ~95% correct at scale, to meet deadlines within our staffing constraints?
Most of us who don't work on the Linux kernel, space shuttles, or near-realtime OSes were already writing good-enough code to meet business requirements.
I hated the old world where some boomer-mentality "senior" dev(s) would take days or weeks to deliver ONE fucking thing, and it would still have bugs and issues.
I like the new world where individuals can move fast and ship, and if there are bugs and issues they can be resolved quickly.
The boomer-mentality devs and other mids get fired, which is awesome, and orgs become way leaner.
Just because there's an excess of CS majors and programmers doesn't mean we need to build benches for them to keep warm.
Some places have military-grade paperwork, where mistakes are measured in millions of dollars per minute. Other places are "just push it and fix it later".
AI is not going to change that. That is a people problem. Not something you can automate away. But you can fire your way out of it.
I've only ever worked at places that are at the bleeding edge and even there we had total slackers.
I've been biking to work occasionally for a few years now and it definitely gets easier.
This mythical class of developer doesn't exist. Are you trying to tell me there's a class of developers out there doing three months' worth of work every single day at the office?
Writing code is what I expect a junior or mid-level engineer to spend 20% of their time doing. By the time you reach senior engineer it should be less (though when you do write code you are faster, and so might write more code despite spending less time on it).
I would like to see GitHub's project creation and activity charts from today compared to 5 years ago. Similar trends must be happening behind closed doors as well. Techy managers are happy to be building again. Fresh grads are excited about how productive they can be. Scammers are deploying in record time. Business is boomin'.
It's likely that all this code will do more harm than good to the rest of us, but it's definitely out there.
“I distinguish four types. There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage.”
— Kurt von Hammerstein-Equord
The industrial revolution led to gains that allowed for weekends and the elimination of child labor, but those gains didn't come for free; they had to be fought for.
If we don't fight for it, what are we gaining? More intense work in exchange for what?
It just takes thinking about it for 5 seconds to see the contradiction. If AI were so good at reducing work, why is it that every company engaging with AI sees its workload increase?
20 years ago SV was stereotyped for "lazy" or fun-loving engineers who barely worked but cashed huge paychecks. Now I would say the stereotype is overworked engineers who, at the mid level, are making less than engineers did 20 years back.
I see it across other disciplines too. Everyone I know, from sales to lawyers, who engages with AI seems to get stuck in a loop where the original task is easier, but it reveals 10 more smaller tasks that fill up their time even more than before AI.
That's not to say productivity gains with AI aren't to be found. It just seems like the gains pull people into a flywheel of increasing work.
There is a lot of work to do; just because you are doing more work with your time doesn't mean you can somehow count that as less work.
Are the people leveraging LLMs making more money while working the same number of hours?
Are the people leveraging LLMs working fewer hours while making the same amount of money?
If neither of these are true, then LLMs have not made your life better as a working programmer.
Nobody is getting a raise for using AI. So no.
>Are the people leveraging LLMs working fewer hours while making the same amount of money?
Early adopters, maybe, as they offload some work to agents. As AI commodifies and becomes the baseline, that will invert, especially as companies shed people to have the remaining staff "multiply" their output with AI.
So the answer will be no and no.
(USSR National Anthem plays) But if you owned the means of production and kept the fruits of your labor, say as a founder or in a sole-proprietor side hustle, then it's possible those productivity gains do translate into real time gains on your part.
Neither are the hours worked.
Nor is the money.
Just think of the security guard walking around on site, versus someone who has a dozen monitors.
That is, if anyone uses it, your life will be worse; but if you don't use it, your life will be even worse than the lives of those using it.
Too bad you programmers didn't unionize when you had the chance so you could fight this. Guess you'll have to pull yourself up by your bootstraps.
The software experience is always going to feel about the same speed perceptually, and employers will expect you to work the same amount (or more!)
Throughout human history, we have chosen more work over keeping output stable.
These days, that choice is more viable than ever, as the basic level of living supported by even a few hours a week of minimum wage affords you luxuries unimaginable 50 or 100 years ago.
I can pretty easily do a 12h day of prompting but I haven't been able to code for 12h straight since I was in college.
Additionally, I can only eke out about 4 hours of really deep diving nowadays, and have structured my workday around that, delegating low-mental-cost tasks to after that initial dive. Now diving carries a low enough mental cost that I can do 8-12 hours of it.
It's a bicycle. Truly.
The older I get, the more I see the wisdom in the ancient ideas of reducing desires and being content with what one has.
---
Later Addition:
The article's essential answer is that workers voluntarily embraced (and therefore tolerated) the longer hours because of the novelty of it all. Reading between the lines, this is likely to cause shifts in expectations (and ultimately culture) once the novelty wears off and workers realize they have been duped into increasing their work hours and intensity, which will put an end to the voluntary embrace of those longer hours. The dreaded result (for the poor company, won't anyone care about it?!) is cognitive overload, hence worker burnout and turnover, and ultimately reduced work quality and higher HR transaction costs. Therefore, TFA counsels, companies should set norms limiting the use of generative language models (GLMs, so-called "AI").
I find it unlikely that companies will limit GLM use or set reasonable norms: instead they'll crack the whip!
Do you want to though?
It's 2026 for god's sake. I don't want to work __longer__ days, I want to work __shorter__ days.
Prompting has so many distractions and context switches I get sick of it after an hour.
The coding is now assumed "good enough" for me, but the problem definition and context that go into that code aren't. I'm now able to spend more time on the upstream components of my work (where real, actual, hard thinking happens) while letting the AI figure out the little details.
Obviously, "take a day off" is not the value prop their selling to buyers (company leadership), but they can't be so on the nose in a public commercial that they scare individual contributors.
Heavy machinery replaces shovels. It reduces the workload on the shovel holders; however, someone still needs to produce the heavy machinery.
Some of these companies are shovel holders realizing they need to move upstream. Some are already upstream, racing to bring a product to market.
The underlying bet for nearly all of these companies is "If I can replace one workflow with AI, I can repeat that with other workflows and dominate"
Isn't it simple?
Because of competition, which has increased because the barrier to entry for building new software products is much lower.
You output a lot, but so does your competition.
It feels like leadership is putting immense pressure on everyone to prove their investment in AI is worth it, and we all feel the pressure to show them it is, while actually having to work longer hours to do so.
There's a palpable desperation that makes this wave different from mobile or cloud. It's not about making things better so much as it's about not being left behind.
I'm not sure of the reason for this shift. It has a lot of overlap with the grindset culture you see on Twitter where people caution against taking breaks because your (mostly imaginary) competition may catch up with you.
Our QA & Support engineers have now started creating MR's to fix customer issues, satisfy customer requests and fix bugs.
They're AI sloppy and a bunch of work to fix up, but they're a way better description of the problem than the tickets they used to send.
So now instead of me creating a whole bunch of work for QA/Support engineers when I ship sub-optimal code to them, they're creating a bunch of work for me by shipping sub-optimal code to me.
Which is then more slop I have to review.
Our product is not SaaS, it's software installed on customer computers. Any bug that slips through is really expensive to fix. Careful review and code construction is worth the effort.
But I doubt companies and management will think for a second that this voluntary increase in "productivity" is at all bad; it will probably be encouraged.
Have 10 people on staff.
Fire 5.
The remaining 5 have to do all the duties or get fired, for the same pay.
AI makes the easy part easier and the hard part harder
Having well-rested employees who don't burn out is, though.
Love this quote. For me, barely a few weeks in, I feel exactly this. To clarify: I feel this only when working on dusty old side projects. When I use it to build for the org, it's still a slog, just faster.
We have been on this track for a long time: cars were supposed to save time in transit, but people started living farther from city centres (cf. Marchetti's constant). Email and instant messaging were supposed to eliminate the wait time of postal services, but we now send orders of magnitude more messages, and social norms have shifted such that faster replies are expected.
"AI" backed productivity increases are only impressive relative to non-AI users. The idilliac dream of working one or two days a week with agents in the background "doing the rest" is delusional. Like all previous technologies once it reaches mass adoption everyone will be working at a faster pace, because our society is obsessed with speed.
Arguably the only jobs that are necessary in society are related to food, heating, shelter, and maybe some healthcare. Everything else, which is what most people are doing, is just feeding the never-ending treadmill of consumer desire and bureaucratic expansion. If everyone adjusted their desired living standards and possessions to those of just a few centuries ago, almost none of us would need to work.
Yet here we are, still on the treadmill! It's pretty clear that making certain types of work no longer needed will just create new demands and wants, and new types of work for us to do. That appears to be human nature.
If before AI we were talking about 6-hour days as an aim, we should now be talking about a 4-hour workday, without any reduction in pay.
Otherwise everyone is going to burn out.
- Average manager
“When computers first came out we were told:
‘Computers will be so productive and save you so much time you won’t know what to do with all of your free time!’
Unsurprisingly, that didn’t happen.”
Aka the Jevons paradox in practice.
The kinds of productivity scaling they had been seeing to that point could be reasonably extrapolated to all kinds of industrial re-alignment.
Then we ran out of silver bullets.
[Still waiting to see what percentage of LLM hype is driven by people not having read The Mythical Man-Month.]
What really happens is everybody adopts the same strategy and raises the work floor while demanding more.
Until we get rid of unlimited greed in humans we shouldn't expect a change.
Not sure if everyone shares this sentiment, but the reason I use AI as a crutch is the poor documentation that's out there; even for simple terminal commands, typing man ls shows no usage examples. I just end up putting up with the code output because it works well enough in the short term, but this doesn't seem like a sustainable plan long term either.
There is also this dread I feel: what would I do if AI went down permanently? The tools I tried, like Zeal, really didn't do it for me for documentation either. Not sure who decided on the documentation format, but this "made by professionals, for professionals" style isn't really cutting it anymore. Apologies in advance if I missed any tools, but in my 4+ years of university nobody ever mentioned any quality tools either, and I'm sure this trend is happening everywhere.
Sometimes you end up with tasks that are low intensity but long duration. Like, I need to supervise this AI over the course of three hours, but the task is simple enough that I can watch a movie while I do it. So people measuring my work time are like "wow, he's working extra hours", but all I did during that time was press enter 50 times and write 3 sentences.