> To directly measure the real-world impact of AI tools on software development, we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work. Then, we randomly assign each issue to either allow or disallow use of AI while working on the issue. When AI is allowed, developers can use any tools they choose (primarily Cursor Pro with Claude 3.5/3.7 Sonnet—frontier models at the time of the study); when disallowed, they work without generative AI assistance. Developers complete these tasks (which average two hours each) while recording their screens, then self-report the total implementation time they needed. We pay developers $150/hr as compensation for their participation in the study.
So it's a small sample size of 16 developers. And it sounds like different tasks were (randomly) assigned to the no-AI and with-AI groups - so the control group doesn't have the same tasks as the experimental group. I think this could lead to some pretty noisy data.
Interestingly, small sample size isn't in the list of objections that the author includes under "Addressing Every Objection You Thought Of, And Some You Didn’t".
I do think it's an interesting study. But would want to see if the results could be reproduced before reading into it too much.
I think that's where you get 10-20x. When you're working on niche stuff it's either not gonna work or work poorly.
For example, right now I need to figure out why an ffmpeg filter doesn't do X thing smoothly, even though the C code for the filter is tiny and self-contained. Gemini refuses to add comments to the code. It just apologizes for not being able to add comments to 150 lines of code lol.
However, for building an ffmpeg pipeline in Python, I was dumbfounded by how fast I was prototyping stuff and building fairly complex filter chains. If I'd had to do that by hand, just by reading the docs, it would've taken me a whole lot more time, effort and frustration; with Gemini it was a joy to figure out.
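For a flavor of the kind of thing that's quick to iterate on this way, here's a minimal sketch in Python that assembles an ffmpeg -filter_complex chain via subprocess (the filter names and parameters are just illustrative, not the actual pipeline from that project):

    import subprocess

    def build_filter_chain(width=1280, fps=30):
        # Label each stage so later filters can reference earlier outputs.
        return (
            f"[0:v]scale={width}:-2[v0];"   # scale, preserving aspect ratio
            f"[v0]fps={fps}[v1];"           # normalize frame rate
            f"[v1]hqdn3d=4:3:6:4[vout]"     # light denoise
        )

    def transcode(src, dst):
        cmd = [
            "ffmpeg", "-y", "-i", src,
            "-filter_complex", build_filter_chain(),
            "-map", "[vout]", "-map", "0:a?",  # keep audio if present
            "-c:v", "libx264", "-c:a", "copy",
            dst,
        ]
        subprocess.run(cmd, check=True)

    transcode("in.mp4", "out.mp4")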
So going back to the study: IMO it's flawed, because by definition working on new features for open source projects wouldn't be the bread and butter of LLMs. But most people aren't working on stuff like this; they're rewriting the same code that 10000 other people have written, but with their own tiny little twist or whatever.
I thought it was the model, but then I realised v0 is carried by the shadcn UI library, not the intelligence of the model.
Example: using LeafletJS — not hard, but I didn't want to have to search all over to figure out how to use it.
Example: other web page development requiring dropping image files, complicated scrolling, split-views, etc.
In short, there are projects I have put off in the past but eagerly begin now that LLMs are there to guide me. It's difficult to compare times and productivity in cases like that.
When I'm working with platforms/languages/frameworks I'm already deeply familiar with, I don't think they save me much time at all. When I've tried to use them in this context, they seem to save me a bunch of time in some situations, but also cost me a bunch of time in others, resulting in basically a wash as far as time saved goes.
And for me a wash isn't worth the long-term cost of losing touch with the code by not being the one to have crafted it.
But when it comes to environments I'm not intimately familiar with they can provide a very easy on-ramp that is a much more pleasant experience than trying to figure things out through often iffy technical documentation or code samples.
Some of the most productive devs don't get paid by the big corps who make use of their open source projects, hence the constant urging of corps and people to sponsor the projects they make money from.
What about countries? Here in Poland, $25k would be an amazing salary for a senior, while in the USA fresh grads can earn $80k. Are they more productive?
... at the same time, given the same seniority, job and location - I'd be willing to say it wouldn't be a bad heuristic.
I made great money running my own businesses, but the vast majority of the programming was by people I hired. I’m a decent talent, but that gave me the ability to hire better ones than me.
Like can we determine the productivity of doctors, lawyers, journalists, or pastry chefs?
What job out there is so simple that we can meaningfully measure all the positive and negative effects of the worker, as well as account for different conditions between workers?
I could probably get behind the idea that you could measure productivity for professional poker players (given a long enough evaluation period). Hard to think of much else.
Like, what if by focusing on LLMs for productivity we just reinforce old bad habits and get stuck in a local maximum... And even worse, what if being stuck with current so-so patterns, languages, etc. means we don't innovate in language design, tooling, or other areas that might actually be productivity wins?
I expect it'll balance.
AI isn't very good at being concise, in my experience. To the point of producing worse code. Which is a strange reversal from humans, who might just have a habit of being too concise, but not to the same degree.
In my experience, review was inadequate back before we had AI spewing forth code of dubious quality. There's no reason to think it's any better now.
An actually-useful AI would be one that would make reviews better, do them itself, or at least help me get through reviews faster.
Wouldn't it be the opposite? I'd expect the code would be 47% longer because it's worse and heavier in tech debt (e.g. code repeated in multiple places instead of being factored out into a function).
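A toy illustration of that kind of bloat (hypothetical, not from the study): the same check pasted into every handler instead of being factored out once.

    # Pasted inline into several handlers - the duplication that inflates line counts:
    def create_user(req):
        if not req.get("email") or "@" not in req["email"]:
            raise ValueError("invalid email")
        return {"created": req["email"]}

    def invite_user(req):
        if not req.get("email") or "@" not in req["email"]:
            raise ValueError("invalid email")
        return {"invited": req["email"]}

    # Factored out once - fewer lines overall and one place to fix:
    def require_email(req):
        if not req.get("email") or "@" not in req["email"]:
            raise ValueError("invalid email")
        return req["email"]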
These were maintainers of large open source projects. It's all relative. It's clearly providing massive gains for some and not as much for others. It should follow that it's benefit to you depends on who you are and what you are working on.
It isn't black and white.
There are some very good findings though, like how the devs thought they were sped up but they were actually slowed down.
My analogy to this is seeing people spend time trying to figure out how to change colors and draw shapes in PowerPoint, rather than focus on the content and presentation. So here, we have developers now focusing their efforts on correcting the AI output, rather than doing the research and improving their ability to deliver code in the future.
Hmm...
When I’m in the “zone” I wouldn’t go near an LLM, but when I’ve fallen out of the “zone” they can be useful tools in getting me back into it, or just finishing that one extra thing before signing off for the day
I think the right answer to “does LLM use help or hinder developer productivity” is “it depends on how you use them”
I guess the tricky bit is, nobody knows what the future looks like. "The internet is a fad" in 1999 hasn't aged well, but a lot of people touted 1960s AI, XML and 3D televisions as things that'd be the standard tools in only a few years.
We're all just guessing till then.
They're not great at business logic though, especially if you're doing anything remotely novel. Which is the difficult part of programming anyway.
But yeah, to the average corporate programmer who needs to recreate the same internal business tool that every other company has anyway, it probably saves a lot of time.
How I measure performance is how many features I can implement in a given period of time.
It's nice that people have done studies and have opinions, but for me, it's 10x to 20x better.
Someone already operating at the very limit of their abilities, doing stuff that is for them high complexity, high cognitive load, detail-intense, and tactically non-obvious? Even a machine that just handed you the perfect code can't 20x your real output; even if it gave you the source file at 20x your native sophistication, you wouldn't be able to build and deploy it, let alone make changes to it.
But even if it's only the last 5-20%: when you're already operating at your very limit and trying to hit that limit every single day, that's massive. It makes a bunch of stuff on the bubble go from "not realistic" to "we did that".
A key skill is to sense when the AI is starting to guess at solutions (no different to human devs) and then either lean on another AI or reset the context and start over.
I'm finding the code quality increases greatly with the addition of the text 'and please follow best practices because will be pen tested on this!' and wow... it takes it much more seriously.
Also, I disagree. For web dev at least, most people are just rewriting the same stuff in a different order. Even though the entire project might be complex from a high-level perspective, when you dive into the components, or even just a single route, it ain't "high complexity" at all. And since I believe most jobs are in web / app dev, which just recycles the same code over and over again, that's why there are a lot of people claiming huge boosts to productivity.
When you zoom in, even this kind of work isn't uniform - a lot of it is still shaving yaks, boring chores, and tasks that are hard dependencies for the work that is truly cognitively demanding, but are themselves easy(ish) annoyances. It's those subtasks - and the extra burden of mentally keeping track of them - that set the limit on what even the most skilled, productive engineer can do. Offloading some of that to AI frees up mental capacity for the work that actually benefits from it.
> Even a machine that just handed you the perfect code can't 20x your real output, even if it gave you the source file at 20x your native sophistication you wouldn't be able to build and deploy it, let alone make changes to it.
Not true if you use it right.
You're probably following the "grug developer" philosophy, as it's popular these days (as well as "but think of the juniors!", which is the perceived ideal in the current zeitgeist). By design, this turns coding into boring, low-cognitive-load work. Reviewing such code is, thus, easier (and less demoralizing) than writing it.
20x is probably a bit much across the board, but for the technical part, I can believe it - there's too much unavoidable but trivial bullshit involved in software these days (build scripts, Dockerfiles, IaaS). Preventing deep context switching on those is a big time saver.
Like: Why isn’t this working? Here Claude, read this like 90-page PDF and tell me where I went wrong interfacing with this SDK.
Ohh, I accidentally passed async_context_background_threading_safe instead of async_context_thread_safe_poll, and so now it’s panicking. Wow, that would have taken me forever.
I'm leaning into the future growth of AI capabilities to help me here, otherwise I'll have to do it myself.
That is a tomorrow problem; there's too much project structure/functionality to get right first.
I am wondering, what sort of tasks are you seeing this 20x boost on?
I scoped out a body of work and even with the AI assisting on building cards and feature documentation, it came to about 2 to 4 weeks to implement.
It was done in 2 days.
The key I've found with working as fast as possible is to have planning sessions with Claude Code and make it challenge you and ask tons of questions. Then get it to break the work into 'cards' (think Jira, but they are just .md files in your repo) and then maintain a todo.md and done.md file pair that sorts and organizes work flow.
Then start a new context, tell it to review todo.md and pick up next task, and burn through it, when done, commit and update todo.md and done.md, /compact and you're off on the next.
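Roughly what that looks like on disk (hypothetical file names, just to show the structure):

    repo/
      cards/
        0012-add-rate-limiter.md     # problem statement, constraints, acceptance criteria
        0013-metrics-endpoint.md
      todo.md   # ordered queue, one line per card, next task at the top
      done.md   # finished cards moved here with a one-line summary and commit hash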
It's more than AI hinting at what to do, it's a whole new way of working with rigor and structure around it. Then you just focus fire on the next card, and the next, and if you ever think up new features, then card it up and put it in the work queue.
Just yesterday I had some AI bro butt into a conversation I was having with a good friend, and besides other rudeness he got proper mad because I kept telling him I understand what the models do and I am skeptical of his claims of future progress. All his arguments boiled down to wishful thinking and extending the line, and he would NOT believe I understand the systems he was describing. I am still a bit sore about it. These guys are literally going out of their way to spam me with their bs in real life now.
latenightcoding•6h ago
edit: should have mentioned the low-level stuff I work on is mature code and a lot of times novel.
sottol•6h ago
I ended up shoehorned into backend dev in Ruby/Py/Java and don't find it improves my day-to-day a lot.
Specifically in C, it can bang out complicated but mostly common data structures without fault, where I would surely make off-by-one errors. I guess since I do C as a hobby, I tend to solve more interesting and complicated problems, like generating a whole array of dynamic C-dispatchers from a UI-library spec in JSON that allows parsing and rendering a UI specified in YAML. Gemini Pro even spat out a YAML-dialect parser after a few attempts/fixes.
Maybe it's a function of familiarity and the problems you end up using the AI for.
moron4hire•6h ago
Recently, my company has been investigating AI tools for coding. I know this sounds very late to the game, but we're a DoD consultancy and one not traditionally associated with software development. So most of the people in the company are very impressed with the AI's output.
I, on the other hand, am a fairly recent addition to the company. I was specifically hired to be a "wildcard" in their usual operations. Which is to say, maybe 10 of us in a company of 3000 know what we're doing regarding software (but that's being generous, because I don't really have visibility into half of the company). So, that means 99.7% of the company doesn't have the experience necessary to tell what good software development looks like.
The stuff the people using the AI are putting out is... better than what the MilOps analysts pressed into writing Python-scripts-with-delusions-of-grandeur were doing before, but by no means what I'd call quality software. I have pretty deep experience in both back end and front end. It's a step above "code written by smart people completely inexperienced in writing software that has to be maintained over a lifetime", but many steps below, "software that can successfully be maintained over a lifetime".
IX-103•5h ago
You can tweak the prompt a bit to skew the probability distribution with careful prompting (LLMs that are told to claim to be math PhDs are better at math problems, for instance), but in the end all of those weights in the model are spent to encode the most probable outputs.
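A minimal sketch of that persona-skew trick using the OpenAI Python SDK (the model name is a placeholder and the size of the effect is an open question; the point is just that the system prompt shifts which outputs are most probable):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Persona framing intended to skew sampling toward careful, worked math.
            {"role": "system", "content": "You are a mathematics PhD. Reason step by step and double-check arithmetic."},
            {"role": "user", "content": "Is 2^31 - 1 prime?"},
        ],
        temperature=0,
    )
    print(resp.choices[0].message.content)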
So, it will be interesting to see how this plays out. If the average person using AI is able to produce above average code, then we could end up in a virtuous cycle where AI continuously improves with human help. On the other hand, if this just allows more low quality code to be written then the opposite happens and AI becomes more and more useless.
jack_h•3h ago
When it comes to software, the entire reason maintainability is a goal is that writing and improving software is incredibly time-consuming and requires a lot of skill. It requires so much skill and time that during my decades in industry I rarely found code I would consider quality. Furthermore, the output from AI tools currently may have various drawbacks, but this technology is going to keep improving year over year for the foreseeable future.
kannanvijayan•6h ago
In both of these cases, I found that just the smart auto-complete is a massive time-saver. In fact, it's more valuable to me than the interactive or agentic features.
Here's a snippet of some code that's in one of my recent buffers:
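(Stand-in sketch with hypothetical names rather than the literal buffer contents; the comment lines are the part typed by hand, and the code under them is the kind of thing the completion fills in.)

    # map each virtual register to a stack slot, 8 bytes apart, below the frame pointer
    def assign_stack_slots(vregs):
        return {vreg: -8 * (i + 1) for i, vreg in enumerate(vregs)}

    # emit a spill: store the physical register into the vreg's assigned slot
    def emit_spill(buf, phys_reg, slot_offset):
        buf.append(f"  str {phys_reg}, [fp, #{slot_offset}]")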
The actual code _I_ wrote was the comments. The savings in not having to type out the syntax is pretty big. About 80% of the time in manual coding would have been that: little typos, little adjustments to get the formatting right. The other nice benefit is that I don't have to trust the LLM. I can evaluate each snippet right there, and typically the machine does a good job of picking out syntactic style and semantics from the rest of the codebase and file and applying it to the completion.
The snippet, if it's not obvious, is from a bit of compiler backend code I'm working on. I would never have even _attempted_ to write a compiler backend in my spare time without this assistance.
For experienced devs, autocomplete is good enough for massive efficiency gains in dev speed.
I still haven't warmed to the agentic interfaces because I inherently don't trust the LLMs to produce correct code reliably, so I always end up reviewing it, and reviewing greenfield code is often more work than just writing it (esp now that autocomplete is so much more useful at making that writing faster).
cguess•5h ago
For frontend though? The stuff I really don't specialize in (despite some of my first HTML beginning on FrontPage back in 1997), it's a lifesaver. Just gotta be careful with prompts, since so many front-end frameworks are basically backend code at this point.
sysmax•4h ago
Things like "apply this known algorithm to that project-specific data structure" work really well and save plenty of time. Things that require a gut feeling for how things are organized in memory don't work unless you are willing to babysit the model.
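An illustration of the first category (names made up): asking for a textbook traversal over your own in-memory structure, which models handle well because the algorithm itself is everywhere in the training data.

    from collections import deque

    # Project-specific node type (hypothetical): children keyed by edge label.
    class Node:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or {}

    # "Known algorithm applied to it": plain BFS, yielding nodes level by level.
    def bfs(root):
        seen, queue = {id(root)}, deque([root])
        while queue:
            node = queue.popleft()
            yield node
            for child in node.children.values():
                if id(child) not in seen:
                    seen.add(id(child))
                    queue.append(child)

    root = Node("a", {"x": Node("b"), "y": Node("c", {"z": Node("d")})})
    print([n.name for n in bfs(root)])  # ['a', 'b', 'c', 'd']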