Personally, I’ve been seeing the number of changed lines in a PR reach into the mid-hundreds now. And fundamentally, the developers who make them don’t understand how they work. They often think they do, but then I’ll ask them something about the design and they’ll reply: “IDK, Claude did that.”
By no means am I down on AI, but I think proper procedures need to be put into place unless we want a giant bomb in our code base.
AI may be multi-threaded, but there's still a human global interpreter lock in place. :D
If you put the code up for review, regardless of the source, you should fundamentally understand how it works.
This raises a broader point about AI and productivity: while AI promises parallelism, there's still the human in the middle who is responsible for the code.
The promise of "parallelism" is overstated.
Hundreds of PRs should not be trusted. Or at least not without the C-suite understanding such risks. Maybe you're a small startup looking to get out the door as quickly as possible, so... YOLO.
But it's going to be a hot mess. A "clean up in aisle nine" level mess.
Lots of companies just accept bugs as something that happens.
Calendar app for local social clubs? Ship it and fix it later.
B2B payments software that triggers funds transfers? JFC I hope you PIP people for that.
So I guess if you asked Claude why it did that, the truth might be "IDK, I copy-pasted it from Stack Overflow."
The same stuff pasted with a different sticker. Looks good to me.
You're right, saying you got something off SO would get you laughed out of programming circles back in the day. We should be applying the same shame to people who vibe code, not encourage it, if we want human-parseable and maintainable software.
For whom is this a danger?
If we're paid to dig ditches and fill them, who are we to question our supreme leaders? They control the purse strings, so of course they know best.
Stack Overflow at least gave you provenance for what you copied and pasted. Models may not. Provenance is still a thing; without it, the code carries added risk.
I've also seen "ask ChatGPT if you're doing X right," followed by signing off on whatever it recommends without checking.
At this point I'm pretty confident I could trojan-horse whatever decision I want past certain people by sending enough screenshots of ChatGPT agreeing with me.
So, for example, by and large the orgs I've seen chucking Claude PRs over the wall with little review were previously chucking 100% human written PRs over the wall with little review.
Similarly, the teams I see effectively using test suites to guide their code generation are the same teams that effectively use test suites to guide their general software engineering workflows.
Ask pretty much any FOSS developer who has received AI-generated PRs (both the code and the explanations) on GitHub about their experiences; when you complain about these, the author will almost always use the same AI to generate the responses. It's a huge time sink if you don't cut them off. There are plenty of projects out there now with explicit policy documentation against such submissions, and even boilerplate messages for rejecting them.
It's not, and yet I have seen that offered as an excuse several times.
100% my takeaway after trying to parallelize using worktrees. While Claude has no problem managing more than one context instance, I sure as hell do. It’s exhausting, to the point of slowing me down.
Inb4 the chorus of whining from AI hypists accusing you of being a coastal elitist intellectual jerk for daring to ask that they might want to LEARN something.
I am so over this anti-intellectual garbage. It's gotten to such a ridiculous place in our society and is literally going to get tons of people killed.
I strongly agree. However, manager^x do not, and they want to see reports of the massive "productivity" gains.
"Claude did that" is functionally equivalent to "idk I copied that from r/programming" and is totally unacceptable for a professional
I know many who have it from on high that they MUST use AI. One place even has bonuses tied not to productivity, but to how much they use AI.
Meanwhile managers ask: if AI is writing so much code, why aren't they seeing it in top-line productivity numbers?
That will work, but only until the people filing these PRs go crying to their managers that you refuse to merge any of their code, at which point you'll be given a stern reprimand from your betters to stop being so picky. Have fun vibe-reviewing.
At one of my jobs, the PRs are far less reviewable, but now the devs write tests when they didn’t use to bother. I’ve always viewed reviewing PRs as work without recognition, so I never did much of it anyway, but now that there are passing tests, I often approve without more than a cursory skim.
So yes, it has made it more productive for me to get their work off my plate.
No, like you I’m getting more PRs that are less reviewable.
It multiplies what you’re capable of. So you’ll get a LOT of low quality code from devs who aren’t much into quality.
When computers give different answers to the same questions it's a fundamental shift in how we work with them.
Using AI tooling means, at least in part, betting on the future.
It means betting on a particular LLM-centric vision of the future.
I’m still agnostic on that. I think LLMs allow for the creation of a lot of one off scripts and things for people that wouldn’t otherwise be coding, but I have yet to be convinced that more AI usage in a sufficiently senior software development team is more valuable than the traditional way of doing things.
I think there’s a fundamental necessity for a human to articulate what a given piece of software should do with a high level of specificity, and that can’t ever be avoided. The best you can do is piggyback off higher-level languages and abstractions that guess what the specifics should be, but I don’t think it’s realistic to think all combinations of all business logic and UI can be boiled down to common patterns that an LLM could infer. And even if that were true, people get bored/like novelty enough that they’ll always want new human-created stuff to shove into the training set.
Could this be fixed by adjusting how tickets are scoped?
I would want someone entirely off of my team if they did that. Anyone who pushes code they don't understand at least well enough to answer "What does that do?" and "Why did you do it that way?" deserves for their PR to be flat out rejected in whole, not just altered.
"Then your job is to go ask Claude and get back to me. On that note, if that's what I'm paying you for now, I might be paying you too much..."
I'm really interested to see how the intersection of AI and accountability develops over the next few years. It seems like everyone's primary job will effectively be taking accountability for the AI they're driving, and the pay will be a function of how much and what kind of accountability you're taking and the overall stakes.
It's like buying a trinket just because it's cheap. It's still ultimately wasteful.
Also if you buy an ultimately useless trinket, well that's just life. Everything we do can be considered 'ultimately' useless.
Edit: "amirhirsch" user probably explained this better than me in an above comment.
It’s a shame too, because it really could have been something so much more amazing. I’d imagine higher education would shift back to what it used to be: a pastime for bored elites. We would probably see a large reduction in the middle class and its eventual destruction. First they went for manufacturing with its strong unions; now they go for the white-collar worker, who has little solidarity with his common man (see the lack of unions and ethics in our STEM field, most likely because we thought we could never be made redundant).

Field by field the middle class will be destroyed, with the lower class kept in thrall to addictive social media, substances, and the illusion of selection into the influencer petty-elite (which remains compliant because it doesn’t offer value proportional to the bribes it receives). The elites will have recreated the dynamic that existed for most of human history. Final point: see the obsession of current elites with using artificial insemination to create a reliable and durable contingent of heirs, something previous rulers could only dream about.
It disgusts me and pisses me off so much.
The jan.ai front-end now has a feature:
>Interface for uploading (or specifying) a folder, then running the prompt on all files in the folder
https://github.com/menloresearch/jan/issues/4909#event-18973...
Hopefully that will allow me to batch process checks/invoices to get them named appropriately, we'll see.
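In the meantime, the same idea is scriptable. A minimal sketch, assuming Jan's local OpenAI-compatible server on its default port, a placeholder model name, and plain-text inputs (scanned checks would first need OCR or a vision-capable model):

    # Run one prompt over every file in a folder via Jan's local API.
    from pathlib import Path
    from openai import OpenAI

    # Port and model name are assumptions; adjust to your Jan setup.
    client = OpenAI(base_url="http://localhost:1337/v1", api_key="unused")
    PROMPT = ("Suggest a short, descriptive filename for this invoice. "
              "Reply with the filename only.")

    for path in Path("invoices").glob("*.txt"):
        text = path.read_text()[:4000]  # truncate to keep the context small
        reply = client.chat.completions.create(
            model="local-model",  # whichever model is loaded in Jan
            messages=[{"role": "user", "content": f"{PROMPT}\n\n{text}"}],
        )
        print(path.name, "->", reply.choices[0].message.content.strip())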
And that was before Claude Code.
The problem is people trying to make the models do things that are too close to their limit. You should use LLMs for things they can already ace, not waste time trying to get them to invent some new algorithm. If I can't 0-3-shot a problem, I'll just do it manually or not do it.
It's similar to giving up on a Google search: if nothing useful comes back in the first few queries, you don't keep at it the whole afternoon.
I'm not a programmer but from time to time would make automations or small scripts to make my job easier.
LLMs have made much more complex automations and scripts possible while making it dead simple to create the scripts I used to make.
A good code review and edit to remove the excess verbosity, and you got a feature done real fast.
Ask it for something at or above its limit and the code is very difficult to debug, difficult to understand, has potentially misleading comments, and more. Knowing how to work with these overly confident coworkers is definitely a skill. I feel it varies significantly from model to model as well.
It's often difficult to task other programmers with work at or above their limits too.
This is basically the definition of increased productivity and efficiency. Doing more stuff in the same amount of time. What I tell people who are anxious about whether their job might be automated away by AI is this:
We will never run out of problems that need solving. The types of problems you spend your time solving will change. The key is to adapt your process to allocate your time to solving the right kinds of problems. You don’t want to be the person taking an hour to do arithmetic by hand when you have access to spreadsheets.
And this has always been the case throughout all of human history.
Marx: workers sell their capacity to work for a fixed period, and any productivity improvements within that time become surplus value captured by capital.
AI tools are just the latest mechanism for extracting more output from the same wage. The real issue isn’t the technology—it’s that employees can’t capture gains from their own efficiency improvements. Until compensation models shift from time-based to outcome-based, every productivity breakthrough just makes us more profitable to employ, not more prosperous ourselves.
It’s the Industrial Revolution all over again and we’re the Luddites
Eventually they will have to care when things get bad enough -- and it's definitely trending that way fast [1]. But not today and not tomorrow.
So, they also benefit developers that become solopreneurs.
So they increase the next-best alternative for developers compared to work as employees.
What happens when you improve the next-best alternative?
> AI tools are just the latest mechanism for extracting more output from the same wage.
The whole history of software development has been the rapid introduction of additional automation (because no field has been more the focus of software development than software development itself). Looking at the history of developer salaries, that has not been a process of "extracting more output from the same wage." Yes, output per dollar of wage has gone up, but real wages per hour or day worked for developers have also gone up, and faster than wages across the economy generally. It is true and problematic that the degree of capitalism in the structure of the modern mixed economy means the gains of productivity go disproportionately to capital, but it is simply false to say they go exclusively to capital across the board, and it is particularly easy to see that this has been false for productivity gains from further automation in software development.
This is what the whole four-day workweek movement is about; to reclaim some of that productivity increase as personal time. https://en.wikipedia.org/wiki/Four-day_workweek
The economist Keynes predicted a century ago that the workweek would drop to 15 hours due to rising productivity. It hasn't happened, for social reasons.
I don't know what's going to happen when humans become redundant; that's an incipient issue we'll have to grapple with.
Software development as a career will evaporate in the next decade, as will most "knowledge" work such as general medicine, law, and teaching. Surgeons and dentists will continue a bit longer.
Bottom line, most of us will be doing chores while the machines do all the production and creative work.
Unfortunately, it is always a deliberate lie by the people who stand to gain from the new technology. Anyone who has thought about it for five seconds knows that this is not how capitalism works. Productivity gains are almost immediately absorbed and become the new normal. Firms that operate at the old level of productivity get washed out.
I simply can't believe that we're still falling for this. But let's hold out hope. Maybe AGI is just around the corner, and literally everyone in the world will spend our time sipping margaritas on the beach while we count our UBI. Certainly AI could never accelerate wealth concentration and inequality, right? RIGHT?
"But here's the kicker"
"It's not x. It's y."
"The companies that foo? They bar."
Em-dashes galore.
I'm either hypersensitized, seeing ghosts, or this article got the "yo claude make it pop" treatment. It's sad, but anything overly polished immediately triggers some "is this slop or an original thought actually worth my time" response.
New technology often homogenizes and makes things boring for a while.
> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings.
But then a few sentences later, she argues that tools made us less productive.
> when developers used AI tools, they took 19% longer to complete tasks than without AI. Even more telling: the developers estimated they were 20% faster with AI—they were completely wrong about their own productivity.
Then she switches back to saying it saves us time but creates cognitive debt!
> If an AI tool saves you 30 minutes but leaves you mentally drained and second-guessing everything, that’s not productivity—that’s cognitive debt.
I think you have to pick one, or just admit you don't like AI because it makes you feel icky for whatever reason so you're going to throw every type of argument you can against it.
The right answer to this is: speak up for yourself. Dumping your feelings into HN or Reddit or your blog can be a temporary coping mechanism, but it doesn't solve the problem. If you are legitimately working your tail off and not able to keep up with your workload, tell your manager or clients. Tactfully, of course. If they won't listen, then it's time to move on. There are reasonable people/companies to work with out there, but they sometimes take some effort to find.
At first, we spend our time one way (say eight hours, just to pick a number). Then we get the tools to do all of that in six hours. Then when job seeking and hiring, we get one worker willing to work six hours and another willing to work eight, so the eight-hour worker gets the job, all else equal. Labor is a marketplace, so we work as much as we're willing to in aggregate, which is roughly constant over time, so efficiency will never free up individuals' time.
In the context of TFA, it means we just shift our time to "harder" work (in the sense of work that AI can't do yet).
Here is an example.
I decided to create a new app, so I write down a brief of what it should do, ask AI to create a longer readme file about the platform along with design, sequence diagram, and suggested technologies.
I review that document, see if there is anything I can amend, then ask AI for the implementation plan.
Up until this point, this has probably increased the time I usually spend describing the platform in writing. But realistically, designing and thinking about systems was never that fast. I would have to think about use cases, imagine workflows in my mind, and do pen-and-paper diagrams, which I don’t think any of the productivity reports are covering.
The most junior dev on my team was tasked with setting up a repo for a new service. The service is not due for many many months so this was an opportunity to learn. What we got was a giant PR with hundreds of new configurations no one has heard of. It's not bad, it's just that we don't know what each conf does. Naturally we asked him to explain or give an overview, he couldn't. Well because he fed the whole thing to an LLM and it spat out the repo. He even had fixes for bugs we didn't know we had in other repos. He didn't know either. But it took the rest of the team digging in to figure out what's going on.
I'm not against using LLMs, but now I've added a new step to the process: anyone who makes a giant PR also has to give a presentation with an overview for everyone. That forces devs to actually read through the code they generate and understand it.
Don't allow giant PRs without a damn good reason for them. Incremental steps, with testing (automated or human) to verify their correctness, that take you from a known-good-to-known-good state.
you will never be given your time back by an employer. you have to take it. you might be able to ask for it, but it won't be freely given, whether or not you become more efficient. LLM chatbots and agents are, in this sense, just another tool that changes our relationship to the work we do (but never our relationship to work).
I'm not sure that Claude saves me time -- I just spent my weekend working on a Claude Code Audio hook with Claude which I obviously wouldn't have worked on elsewise, and that's hardly the gardening I intended to do ... but man it was fun and now my CC sessions are a lot easier to track by ear!
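For the curious, the shape of such a hook script is roughly this; a minimal sketch, assuming Claude Code's command hooks deliver the event payload as JSON on stdin and are registered under the "hooks" key in .claude/settings.json (the event names and payload field here are assumptions):

    # Play a different system sound per Claude Code hook event, so
    # sessions can be tracked by ear. Register as a command hook.
    import json
    import subprocess
    import sys

    event = json.load(sys.stdin)  # hook payload arrives as JSON on stdin

    SOUNDS = {
        "Stop": "/System/Library/Sounds/Glass.aiff",         # turn finished
        "Notification": "/System/Library/Sounds/Ping.aiff",  # needs input
    }
    sound = SOUNDS.get(event.get("hook_event_name"),
                       "/System/Library/Sounds/Pop.aiff")

    # afplay is macOS's built-in player; use paplay/aplay on Linux.
    subprocess.run(["afplay", sound], check=False)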