Speed is part of fluency and almost a shortcut to explaining the goal in real terms. Nobody is hunting and pecking at 80wpm.
The fact is that programming is not about typing lines of code into an editor.
Improving typing speed from "fast" to "faster" is very difficult. I think it's worth distinguishing between "typing faster is not useful" and "it's not worth the effort to try to type much faster".
There are sometimes cases where it's worth paying a high cost even for some marginal benefit.
Loads of business people also think that code is some magical incantation, and that somehow clicking around in menus configuring stuff is easier.
For a lot of people reading is hard, but no one will admit that. For years I was frustrated and angry at people because I didn't understand that someone can have trouble reading while being a proper adult working in a business role.
I also see when I post online how people misread my comments.
GUI libraries are a pretty good example of this; almost all the time, you're going to parse the form fields in the exact same way, but due to how GUI libraries work, you often end up writing multiple lines of function calls where the only difference is the key you're using to get the variables. You can't really turn it into a function either; it's just lines of code that have to be written to make things work, and although it should be really easy to predict what you need to do (just update the variable name and the string used in the function call), it can end up wasting non-marginal time.
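A minimal tkinter sketch of that rote pattern (the field names here are made up for illustration):

    import tkinter as tk

    root = tk.Tk()
    entries = {key: tk.Entry(root) for key in ("name", "email", "phone")}
    for entry in entries.values():
        entry.pack()

    # The rote part: reading the fields back is one near-identical line per key,
    # differing only in the variable name and the lookup string.
    name = entries["name"].get()
    email = entries["email"].get()
    phone = entries["phone"].get()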
LLMs being able to help with this sort of thing I would, however, consider more a failure of IDEs than anything else. This sort of task is rote, easy to predict, and should even be autogeneratable. Some IDEs even let you do it, but the option is typically hidden deep in a menu and needs to be enabled by messing with their ever-growing settings (when it could probably just be autodetected by checking the file; that's the reason people use IDEs instead of a notepad program, after all). It's as if at some point IDEs changed from helping you write code quicker to only really inspecting and linting your codebase unless you spend hours configuring them to do otherwise. That was partly why Sublime Text and VS Code got their foot in the door, even though they have a much smaller feature list than most traditional IDEs: compared to IDEs they're lightweight (which is pretty crazy, since VS Code is an Electron app) and they provide pretty much equivalent features for most people. LLMs can often predict what comes next after you've written two or three of these rote lines, which is a pretty good way to get the boring stuff out of the way.
Is that worth the sheer billions of dollars thrown at AI? Almost certainly not if you look at the entire industry (it's a massive bubble waiting to pop), but on the customer-fees end, for now the price-to-time-saved ratio for getting rid of that rote work is easily worth it in a corporate environment. (I do expect this to change once the AI bubble pops, however.)
The bottleneck is either product organizations failing to get decent requirements to engineering so they know what to build, or engineering teams being unwilling to start until they have every i dotted and t crossed.
The back end of the problem is that most of the code we see written is already poorly documented across the spectrum. How many commit messages have we seen that just say "wip", for instance? Or you go to a repository and the README is empty?
So the real danger is the Stack Overflow effect on steroids. It's not just a block of code that was pasted in without being understood; it's now entire projects, with little to no documentation to explain what was done or why decisions were made.
If the developer is not savvy about the business case, he cannot have that vision, and all he can do is implement requirements as described by the business, which itself doesn't sufficiently understand technology to build the right path.
The tricky part is always the action plan: how do we achieve X in steps without blowing budget/time/people/other resources?
As a programmer I can see all the rough edges, but that doesn't seem to bother the other 99% of people in the group who use it.
It's a world away from when the industry began. There's a great story from Bill Gates about a time when his ability to simply write code was an incredibly scarce resource. A company was so desperate for programmers that they hired him and Paul Allen as teenagers:
"So, they were paying penalties... they said, 'We don’t care [that they are kids].' You know, so I go down there. You know, I’m like 16, but I look about 13. They hire us. They pay us. It’s a really amazing project... they got a kick out of how quickly I could write code."
That story is a powerful reminder of how much has changed. Writing code was the bottleneck years ago. The core problem, however, has shifted from "How do we build it?" to "What should we build, and is there a business for it?" I think it's credible to say that it was just market demand. Marc Andreessen's main complaint before the AI boom was that "there is more capital available than there are good ideas to fund". Personally, I think that's out of touch with reality, but he's the guy with all the money and none of the ideas, so he's a credible first-hand source.
There is immense, unmet demand for good software in developing countries—for example, robust applications that work well on underpowered phones and low-bandwidth networks across Africa or Southeast Asia. These are real problems waiting for well-executed ideas.
The issue isn't a lack of good ideas, but a VC ecosystem that throws capital at ideas of dubious utility for saturated markets, while overlooking tangible, global needs because they don't fit a specific hyper-growth model.
I do believe that these also fit the hyper-growth model. It's rather that these investors have a very US-centric knowledge of markets and market demands, and thus can barely judge ideas that target very different markets.
Also, he's a VC, but where more funding is needed, even in pure software, is in sustainable businesses that don't have the ambition to take over the world, but rather serve their customer niche well.
I think we have a tendency to overestimate efficiency because of the central role it plays at the margins that mattered to us at any given time.
But the economy is bottlenecked in complex ways. Market demand, money, etc.
It's not obvious that 100X more code is something we can use.
The capability to write high-quality code and have a deep knowledge about it is still a scarce resource.
The difference from former days is rather that the industry began to care less about this.
Back when these tools did not exist, a lot of this knowledge didn't exist yet either. Software now is built on the shoulders of giants. You can write a line of code and get a window in your operating system; people like Bill Gates and his generation wrote the low-level graphics code, had to come up with the concept of a window in the first place, had to invent the fundamentals of graphics programming, and had to wait on and work with hardware vendors to make it performant.
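To make the contrast concrete, here is roughly how little it takes today (standard-library Python, purely as an illustration):

    import tkinter

    tkinter.Tk().mainloop()  # an OS window, with decades of graphics work hidden underneath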
No it wasn't. It never was.
I've even had code submitted to me by juniors which didn't make any sense. When I ask them why they did that, they say they don't know, the LLM did it.
What this new trend is doing is generating a lot of noise and overhead on maintenance. The only way forward, if embracing LLMs, is to use LLMs also for the reviewing and maintenance, which obviously will lead to messy spaghetti, but you now have the tools to manage that.
But the important realization is that for most businesses, quality doesn't really matter. Throwaway LLM code is good enough, and when it isn't you can just add more LLM on top until it does what you think you need.
I can't imagine a professional software developer in a position of authority leaving that statement unchallenged and uncorrected.
If a person doesn't stand behind the code they write, they shouldn't be employed. Full stop.
This should resolve itself via rounds of redundancies, probably targeting the senior engineers who are complaining about the juniors, then by insolvency.
As I get older I spend more of my coding time on walks, at the whiteboard, reading research, and running experiments.
Reminds me of a former colleague of mine, I'd sit next to him and get frustrated because he was a two-finger typer. But, none of his code was wasted. I frequently write code, then cmd+z back to ten minutes ago or just `git checkout .` because I lost track.
Not only did they produce about the same amount of code in a day as they used to produce in a week (or two), several other things made my work harder than before:
- During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).
- The time spent on trivial issues went down a lot, almost to zero, but the remaining issues were much more subtle and time-consuming to find and describe.
- Many bugs were of a new kind (to me), the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be. This breakdown of pattern-matching compared to "organic" code made the overhead much higher. Spending decades reviewing code and answering Stack Overflow questions often makes it possible to pinpoint not just a bug but how the author got there in the first place and how to help them avoid similar things in the future.
- A simple but bad (inefficient, wrong, illegal, ugly, ...) solution is a nice thing to discuss, but the LLM-assisted junior dev often cooks up something much more complex, which can be bad in many ways at once. The culture of slowly growing a PR from a little bit broken, thinking about design and other considerations, until it's high quality and ready for a final review doesn't work the same way.
- Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.
This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves. The junior dev would feel (I assume) much more productive and competent, but the response to their work would eventually lack most of the usual enthusiasm or encouragement from senior devs.
How do people work around these issues? One thing that worked well for me initially was to always require a lot of (passing) tests, but eventually these tests would suffer from many of the same problems.
I don't understand: if they don't test the code they write (even manually), it's not an LLM issue, it's a process one.
They have not been taught what it means to have a PR ready for review; LLMs are irrelevant here.
You think about the implementation and how it can fail. If you don’t think about the implementation, or don’t understand the implementation, I would argue that you can earnestly try to test, but you won’t do a good job of it.
The issue with LLMs here is the proliferation of people not understanding the code they produce.
Having agents or LLMs review and understand and test code may be the future, but right now they’re quite bad at it, and that means that the parent comment is spot on; what I see right now is people producing AI content and pushing the burden of verification and understanding to other people.
Where was the burden prior to LLMs?
If a junior cannot show that his/her code works and demonstrate an understanding of it, how was this "solved" before LLMs? Why can't the same methods work post-LLM? Is it due to volume? If a junior produces _more_ code they don't understand, it doesn't give them the right to just skip PR review, testing, etc.
If they do, where's upper management's role here? The senior should be bringing up this problem, working out a better process, and getting management buy-in.
This is of course especially significant in codebases that do not have strict typing (or any typing at all).
Let's ignore the code quality or code understanding: these juniors are opening PRs, according to the previous user, that simply do not meet the acceptance criteria for some desired behavior of the system.
This is a process issue, not a tools issue.
I too have AI-native juniors (they learned to code alongside Copilot or Cursor or ChatGPT) and they would never dare open a PR that doesn't work or doesn't meet the requirements. They may miss some edge case? Sure, so do I. That's acceptable.
If the OP's juniors are doing that, they have not been taught that they should ask for feedback only once their version of the system does what it needs to do.
Catching this is my job, but it becomes harder if the PR actually has passing tests and just "looks" good. I'm sure we'll develop the culture around LLMs to make sure we teach new developers how to think, but since I learned coding in a pre-LLM world, perhaps I take a lot of things for granted. I always want to understand what my code does, for example; that never seemed optional before, but now skipping it gets you much further than just copy-pasting stuff from Stack Overflow ever did.
I think we've always had this mental model, one which needs to change: senior engineers and product managers scope and design features, IC developers (including juniors for simpler work) implement them, and then senior engineers participate in code review.
Right now I can't see the value in having a junior engineer on the team who is unable to think about how certain features should be designed. The junior engineer who previously spent his time spinning his wheels trying to understand the codebase and all the new technologies he has to get to grips with should instead spend that time figuring out how the feature fits into the big picture, considering edge cases, and then proposing a design for the feature.
There are many junior engineers who I wouldn't trust with that kind of work, and honestly I don't think they are employable right now.
In the short term, I think you just need to communicate this additional duty of care, making sure pull requests are complete because otherwise there's an asymmetry of workload, and judge those interns and juniors on how respectful of it they are.
The issue with LLM tools is that they don't teach this. The focus is always on getting to the end result as quickly as possible, skipping the actually important parts of software development. Problem solving with LLMs is approached by feeding problems back to the LLM, not by solving them yourself. A related issue: relying on an LLM doesn't give you software development experience. That is gained by actually solving problems yourself; understanding how the system works, finding the underlying root cause, fixing it in an elegant way that doesn't create regressions, writing robust tests to ensure it doesn't happen again, etc. This is the learning experience. LLMs can help with this, but they're often not used in this way.
Well, that sucks, because it means the pipeline for engineers to become seniors is completely broken.
AI creates the same problem for hiring too: it generates the appearance of knowledge. The problem you and I have as evaluators of that knowledge is there is no other interface to knowledge than language. In a way this is like the oldest philosophy problem in existence. Socrates spent an inordinate amount of time railing against the sophists, people concerned with language and argument rather than truth. We have his same problem, only now on an industrial scale.
To your point about tests, I think the answer is to not focus on automated tests at first (though of course you should have those eventually), but instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.
There's a reason no one does it: it's inefficient, even in recorded video format. The helpful things are tests and descriptive PRs. The former because its structure is simple enough that you can judge it, and the test run can be part of the commit. The latter for the simple fact that if you can write clearly about your solution, I can then just diff what you told me against what the code is doing, which is way faster than me trying to divine both from the code.
But software development is about producing written artifacts. We actually need the result. We care a lot less about whether or not the developer has a particular understanding of the world. A cursor-written implementation of a login form is of use to a senior engineer because she actually wants a login form.
We actually should, because the developer has to maintain and extend the damned thing in the future.
1. The invention of THE CONCEPT BEHIND THE MACHINE. In our context, this is "Programming as Theory Building." Our programs represent some conception of the world that is NOT identical to the source code, much the way early precision tools embodied philosophies like interchangeability.
2. The building of the machine itself, which has to function correctly. To your point, this is one of the major things we care about, but I don't agree it's the only thing. In the code world this IS the code, to your point. When this is all we think about, though, I think you get spaghetti code bases and poorly trained developers.
3. Training apprentices in both the ideas and the craft of producing machines.
You can argue we should only care about #2, many businesses certainly incentivize thinking in that direction, but I think all 3 are important. Part of what makes coding and talking about coding tricky is that written artifacts, even the same written artifacts, express all 3 of these things and so matters get very easily confused.
Leetcode Zoom calls were always marginal; now, with chat AI, they're virtually useless, though still the norm.
I claim that this approach is sustainable.
The idea behind the "I read all of your code and give feedback" methodology is that the writer put a lot of deep effort into making sure the code is of great quality, and is then expecting feedback, which is often valuable. As long as you can, with some effort, find out by yourself how improvements could be made, don't bother asking for someone else's time.
The problem is thus that the writers of "vibe-generated code" hardly ever put such a deep effort into the code. Thus the code is simply not worth asking feedback for.
- you need to think through the product more, really be sure it’s as clarified as it can be. Everyone has their own process, but it looks like rubber ducking, critiquing, breaking work into phases, those into tasks, etc. (jobs to be done, business requirement docs, domain driven design planning, UX writing product lexicon docs, literally any and all artifacts)
- Prioritize setting up tooling and feedback loops (code quality tools of any and every kind are required). This includes custom rules to help enforce anything you decided during planning. Spend time on this and life will be a lot better for everyone.
- We typically make very, very detailed plans, and then the agents will "IVI" it (e.g. automatic linting, single test, test suite, manual evaluation).
You basically set up as many and as diverse of automatic feedback signals as you can.
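For instance, a bare-bones runner that stacks several such signals might look like this (the specific tool choices are just assumptions):

    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],  # lint
        ["mypy", "src"],         # static types
        ["pytest", "-q"],        # test suite
    ]

    # Run every check so the agent (or human) sees all failing signals at once.
    results = [subprocess.run(check).returncode for check in CHECKS]
    sys.exit(1 if any(results) else 0)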
—-
I will plan and document for 2-4 hours, then print a bunch of small “PRDs” that are like “1 story point” small. There’s clear definitions of done.
Doing this, I can pretty much go to the gym or have meetings or whatever for 1-2 hours, hands off.
—-
1. Mostly written by LLMs, and only superficially reviewed by humans.
2. Written 50-50% by devs and LLMs. Reviewed to the same degree as now.
Software of type 2 will be more expensive and probably of higher quality. Type 1 software will be much, much more common, as it will be cheaper. Quality will be lower, but the open question is whether it will be good enough for the use cases of cheap, mass-produced software. This is the question that is still unanswered by practical experience, and it's the question that all the venture capitalists are salivating over.
Not sure what to tell you otherwise. The code is much more thought through, with more tests, and better docs. There’s even entire workflows for the CI portion and review.
I would look at workflows like this as augmentation rather than automation.
I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether, or they never cared to do them in the first place. Then the burden of maintaining code quality falls on the few who actually care, a burden that has now grown much larger because of the amount of code thrown at them. Unfortunately, these people are often seen as pedants and sticklers who block PRs for no good reason. That sometimes does happen, but most of the time these are the folks who actually care about the product shipped to users.
I don't have a suggestion for improving this, but rather a grim outlook that it's only going to get worse. The industry will continue to be flooded by software developers trained on LLM use exclusively, and the companies who build these tools will keep promoting the same marketing BS because it builds hype, and by extension, their valuation.
I think that's probably true, but I think there are multiple layers here.
There's what's commonly called vibe coding, where you don't even look at the code.
Then there's what I'd call augmented coding, where you generate a good chunk of the code, but still refactor and generally try to understand it.
And then there's understanding every line of it. For this, in particular, I don't believe LLMs speed things up. You can get the LLM to _explain_ every line to you, but what I mean is to look at documentation and specs to build your understanding and test out fine grained changes to confirm it. This is something you naturally do while writing code, and unless you type comically slow, I'm not convinced it's not faster this way around. There's a very tight feedback loop when you are writing and testing code atomically. In my experience, this prevents an unreasonable amount of emergencies and makes debugging orders of magnitude faster.
I'd say the bulk of my work is either in the second or the third bucket, depending on whether it's production code, the risks involved etc.
These categories have existed before LLMs. Maybe the first two are cheaper now, but I've seen a lot of code bases that fall into them - copy pasting from examples and SO. That is, ultimately, what LLMs speed up. And I think it's OK for some software to fall into these categories. Maybe we'll see too much fall into them for a while. I think eventually, the incredibly long feedback cycles of business decisions will bite and correct this. If our industry really flies off the handle, we tend to have a nice software crisis and sort it out.
I'm optimistic that, whatever we land on eventually, generative AI will have reasonable applications in software development. I personally already see some.
What boggles my mind is people are writing code that’s the foundation of products like that.
Maybe it’s imposter syndrome though to think it wasn’t already being done before the rise of LLMs
It may well have been happening before the rise of LLMs, but the volume was a lot more manageable
Now it's an unrestricted firehose of crap, and there just aren't enough good devs to wrangle it.
The volume here is orders of magnitude greater, but that’s the closest example I can think of.
Tech exec here. It is all about gamed metrics. If the board-observed metric is mean salary per tech employee, you'll get masses of people hired in India. In our case, we hire thousands in India. Only about 20% are productive, but % productive isn't the metric, so no one cares. You throw bodies at the problem and hope someone solves it. It's great for generations of overseas workers, many of whom may not have had a job otherwise. You probably have dozens of Soham Parekhs.
Western execs also like this because it inflates headcount, which is usually what exec comp is based on ("I run a team of 150..."). Their lieutenants also like it because they can say "I run a team of 30", as do their sub-lieutenants ("I run a team of 6").
Can the business afford to ship something that fails for 5% of their users? Can they afford to find out before they ship it or only after? What risks do they want to take? All business decisions. In my CTO jobs and fractional CTO work, I always focused on exposing these to the CEO. Never a "no", always a "here's what I think our options and their risks and consequences are".
If sound business decisions lead to vibe coding, then there's nothing wrong with it. It's not wrong to lose a bet where you understood the odds.
And don't worry about businesses that make uninformed bets. They can get lucky, but by and large, they will not survive against those making better-informed bets. Law of averages. Just takes a while.
Sure, technical decisions ultimately depend on a cost-benefit analysis, but the companies who follow this mentality will cut corners at every opportunity, build poor quality products, and defraud their customers. The unfortunate reality is that in the startup culture "move fast and break things" is the accepted motto. Companies can be quickly started on empty promises to attract investors, they can coast for months or years on hype and broken products, and when the company fails, they can rebrand or pivot, and do it all over again.
So making uninformed bets can still be profitable. This law of averages you mention just doesn't matter. There will always be those looking to turn a quick buck, and those who are in it for the long haul, and actually care about their product and customers. LLMs are more appealing to the former group. It's up to each software developer to choose the companies they wish to support and be associated with.
It's rare that startups gain traction because they have the highest-quality product rather than because they have the best ability to package, position, and market it while scaling all the other things needed to make a company.
They might get acqui-hired for that reason, but rarely do they stand the test of time. And when they do, it's almost always because founders stepped aside and let suits run all or most of the show.
And yes, there is enshittification, and there are immoral actors. The market doesn't solve these problems; if anything, it causes them.
What can solve them? I have only two ideas:
1. Regulation. To a large degree this stops some of the worst behaviour of companies, but the reality in most countries I can think of is that it's too slow, and too corrupt (not necessarily by accepting bribes, also by wanting to be "an AI hub" or stuff like that) to be truly effective.
2. Professional ethics. This appears to work reasonably well in medicine and some other fields, but I have little hope our field is going to make strides here any time soon. People who have professional ethics either learn to turn it off selectively, or burn out. If you're a shady company, as long as you have money, you will find competent developers. If you're not a shady company, you're playing with a handicap.
It's not all so black and white for sure, so I agree with you that there's _some_ power in choosing who to work for. They'll always find talent if they pay enough, but no need to make it all too easy for them.
LLM “vibe coding” is another continuation of this “new hotness”, and while the more seasoned developers may have learned to avoid it, that’s not the majority view.
CEOs and C-suites have always been disconnected from the first order effects of their cost-cutting edicts, and vibe coding is no different in that regard. They see the ten dollars an hour they spend on LLMs as a bargain if they can hire a $30 an hour junior programmer instead of a $150 an hour senior programmer.
They will continue to pursue cost-cutting, and the advent of vibe coding matches exactly what they care about: software produced for a fraction of the cost.
Our problem, or the problem of the professionals, is that we have not been successful in translating the inherent problems with the CEOs' approach into a change in how the C-suite operates. We have not successfully persuaded them that higher-quality software = more sales, or lower liability, or lower-cost maintenance, and that's partially because we as an industry have eschewed those for "move fast and break things". Vibe coding is "Move Fast and Break Things" writ large.
This depends a lot on the "programming culture" from which the respective developers come. For example, in the department where I work (in some conservative industry) it would rather be a tough sell to use a new, shiny framework because the existing ("boring") technologies that we use are a good fit for the work that needs to be done and the knowledge that exists in the team.
I rather have a feeling that in particular the culture around web development (both client- and server-side parts) is very prone to this phenomenon.
The Venn diagram of the companies that embrace vibe coding and the companies whose developers like to rewrite applications whenever a new framework comes out is almost a perfect circle, however.
These devs don't get any value whatsoever from LLMs, because explaining it to the LLM takes longer than doing it themselves.
Personally, I feel like everything besides actually vibe coding + maybe sanity checking via a quick glance is a bad LLM application at this point in time.
You're just inviting tech debt if you actually expect this code to be manually adjusted at a later phase. Normally, code tells a story. You should be able to understand the thought process of the developer while reading it, and if you can't, there is an issue. This pattern doesn't hold up for generated code, even if it works. If an issue pops up later, you'll just be scratching your head about what this was meant to do.
And just to be clear: I don't think vibe coding is ready for current enterprise environments either - though I strongly suspect it's going to decimate our industry once tooling and development practices for this have been pioneered. The current models are already insanely good at coding if provided the correct context and prompt.
E.g. countless docs on each method defining use cases, forcing the LLM to backtrack through the code paths before changes to automatically determine regressions, etc. Current vibe coding is basically like the original definition of a hacker: a person creating furniture with an axe. It basically works, kinda.
I think this follows a larger pattern of AI. It helps someone with enough maturity not to rely on it too blindly and enough foresight to know they still need to grow their own skills, but it does well enough that those looking for an easy or quick answer are now given a tool that lets them skip more of the hard work. It empowers seniors (developers, or senior-level people in unrelated fields) but traps juniors. Same as using AI to solve a math problem: is the student verifying their own solution against the AI's, or copying and pasting while thinking they are learning by doing so (or even recognizing they aren't, but not worrying about it since the AI can handle it, not realizing how this will trap them on ever harder problems in the future)?
>...but rather a grim outlook that it's only going to get worse. The industry will continue to be flooded by software developers...
I somewhat agree, but even more grim, I think we are looking at this across many more fields than just software development. The way companies make use of this and the market forces at the corporate level might be different, but it is also impacting education and that alone should be enough to negatively impact other areas.
This reminded me of a quarter million dollar software project one of my employers had contracted to a team in a different country. On the face of it - especially if you go and check by the spec sheet - everything was there but the thing was not a cohesive whole. They did not spend one second beyond the spec sheet and none of the common sense things that "follow" from the spec were there. The whole thing was scrapped immediately.
With LLMs this kind of work now basically becomes free to do and automatic.
Good experienced devs will be able to make better software, but so many inexperienced devs will be regurgitating so much more lousy software at a pace never seen before, it's going to be overwhelming. Or as the original commenter described, they're already being overwhelmed.
Even better if the accountants are using LLMs.
Or even better, hardware prototyping using LLMs with EEs barely knowing what they are doing.
So far, most software dumbassery with LLMs can at least be fixed. Fixing board layouts, or chip designs, not as easy.
I will have my word in the matter before all is said and done. While everyone is busy pivoting to AI I keep my head down and build the tools that will be needed to clean up the mess...
I'm building a universal DOM for code so that we should see an explosion in code whose purpose is to help clean up other code.
If you want to write code that makes changes to a tree of HTML nodes, you can pretty much write that code once and it will run in any web browser.
If you want to write code that makes a new program by changing a tree of syntax nodes, there are an incredible number of different and wholly incompatible environments for that code to run in. Transform authors are likely forced to pick one or two engines to support, and anyone who needs to run a lot of codemods will probably need to install 5-10 different execution engines.
Most people seem not to notice or care about this situation, or to realize that their tools are vastly underserving their potential just because we can't come up with the basic standards necessary to enable universal execution of codemod code, which also means there are drastically lower incentives to write custom codemods and lint rules than there could/should be.
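For readers unfamiliar with codemods, this is the general shape of one in plain Python (using the standard `ast` module, not the BABLR tooling discussed here):

    import ast

    class RenameFoo(ast.NodeTransformer):
        # Rewrite every reference to `foo` as `bar`.
        def visit_Name(self, node: ast.Name) -> ast.Name:
            if node.id == "foo":
                node.id = "bar"
            return node

    tree = ast.parse("foo = foo + 1")
    print(ast.unparse(RenameFoo().visit(tree)))  # prints: bar = bar + 1

Note that `ast.unparse` discards the original formatting, which is exactly the concrete-syntax problem the CSTML format described below is meant to address.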
As two nits, https://docs.bablr.org/reference/cstml and https://bablr.org/languages/universe/ruby are both 404, but I suspect the latter is just falling into the same trap many namespaces do: using a URL when they meant a URN.
The JSX noise is CSTML, a data format for encoding/storing parse trees. It's our main product. E.g. a simple document might look something like `<*BooleanLiteral> 'true' </>`. It's both the concrete syntax and the semantic metadata offered as a single data stream.
The easiest way to consume a CSTML document is to print the code stored in it, e.g. `printSource(parseCSTML(document))`, which would get you `true` for my example doc. Since we store all the concrete syntax, printing the tree is guaranteed to get you the exact same input program the parser saw. This means you can use it to rearrange trees of source code and then print them over the original, allowing you to implement linters, pretty-printers, or codemod engines.
These CSTML documents also contain all the information necessary to do rich presentation of the code document stored within (syntax highlighting). I'm going to release our native syntax highlighter later today hopefully!
Also if you're organizationally changing the culture to force people to put more effort in writing the code, why are you even organizationally using LLMs...?
Yeah, OK, I guess you have to be a bit less unapologetic than Linux kernel maintainers in this case, but you can still shift the culture towards more careful PRs I think.
> why are you even organizationally using LLMs
Many people believe LLMs make coders more productive, and given the rapid progress of gen AI it's probably not wise to just dismiss this view. But there need to be guardrails to ensure the productivity is real and not just creating liability. We could live with weaker guardrails if we can trust that the code was in a trusted colleague's head before appearing in the repo. But if we can't, I guess stronger guardrails are the only way, aren't they?
But when I actually sit down and think it through, I’ve wasted multiple days chasing down subtle bugs that I never would have introduced myself. It could very well be that there’s no productivity gain for me at all. I wouldn’t be at all surprised if the numbers showed that was the case.
But let’s say I am actually getting 20%. If this technology dramatically increases the output of juniors and mid level technical tornadoes that’s going to easily erase that 20% gain.
I've seen codebases that were dominated by mid-level technical tornadoes and juniors; no amount of guardrails could ever fix them.
Until we are at the point where no human has to interact with code (and I’m skeptical we will ever get there short of AGI) we need automated objective guardrails for “this code is readable and maintainable”, and I’m 99.999% certain that is just impossible.
Usually organizational changes are massive efforts. But I guess hype is a hell of an inertia buster.
I imagine if you have a say in their performance review, you might be able to set "writes code more thoughtfully" as a PIP?
Simply hire people who score high on the Conscientiousness, but low on the Agreeableness personality trait. :-)
Folks, we already have bad software. Everywhere.
And nobody cares.
The promise then was similar: "non-programmers" could use a drag-and-drop, WYSIWYG editor to build applications. And, IMO, VB was actually a good product. The problem is that it attracted "developers" who were poor/inexperienced, and so VB apps developed a reputation for being incredibly janky and bad quality.
The same thing is basically happening with AI now, except it's not constrained to a single platform, but instead it's infecting the entire software ecosystem.
Yes, there were a lot of crappy, barely functioning programs made in it. But they were programs that wouldn't have existed otherwise. E.g. for small businesses automating things, VB was amazing, and even if the program was barely functional it was better than nothing.
I think we will need to find a way to communicate “this code is the result of serious engineering work and all tradeoffs have been thought about extensively” and “this code has been vibecoded and no one really cares”. Both sides of that spectrum have their place and absolutely will exist. But it’s dangerous to confuse the two
Large companies can be a red tape nightmare for getting anything built. The process overload will kill simple non-strategic initiatives. I can understand and appreciate less technical people who grab whatever tool they can to solve their own problems when they run into blockers like that. Even if they don't solve it in the best way possible according to experts in the field. That feels like the hacker spirit to me.
Why do you believe we should "turn our back on AI"? Have you used it enough to realize what a useful tool it can be?
Wouldn't it make more sense to learn to turn our backs on unhelpful uses of AI?
I think we already are. We're about to be drowning in a cesspit. The support for the broken software is going to be replaced by broken LLM agents.
I lowkey disagree. I think good experienced devs will be pressured to write worse software or be bottlenecked by having to deal with bad software. Depends on company and culture of course. But consider that you as an experienced dev now have to explain things that go completely over the heads of the junior devs, and most likely the manager/PO, so you become the bottleneck, and all pressure will come down on you. You will hear all kinds of stuff like "80% there is enough" and "don't let perfect be the enemy of good" and "you're blocking the team, we have a deadline", and that will become even worse. Unless you're lucky enough to work in a place with an actually good engineering culture.
That's my expectation as well.
The logical outcome of this is that the general public will eventually get fed up, and there will be an industry-wide crash, just like in 1983 and 2000. I suppose this is a requirement for any overly hyped technology to reach the Plateau of Productivity.
The SOW was so poorly specified that it was easy to maliciously comply with it, and it had no real acceptance tests. As a result legal didn't think IT would have a leg to stand on arguing with the vendor on the contract, and we ended up constantly re-negotiating on cost for them to make fixes just to get a codebase that never went live.
An example of how bad it was: imagine you have a database of metadata to generate downloader tasks in a tool like Airflow, but instead of any sane grouping of, say, the 100 sources with 1000 files each every day into 100-ish tasks, it generated a 700,000-task graph because it's gone task-per-file-per-day.
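In other words, the sane version groups per source, something like this sketch (the names and the downloader stub are made up for illustration):

    def download(source: str, filename: str) -> None:
        print(f"downloading {filename} from {source}")  # stub standing in for real I/O

    sources = {f"source_{i}": [f"file_{j}" for j in range(1000)] for i in range(100)}

    def make_source_task(source, files):
        # One scheduler task per source; the per-file loop lives inside the task.
        def run():
            for f in files:
                download(source, f)
        return run

    tasks = [make_source_task(s, fs) for s, fs in sources.items()]
    print(len(tasks))  # 100 tasks instead of 700,000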
We were using some sort of SaaS DAG/scheduler tool at the time, and if we had deployed, we'd have been using 5x more tasks than the entire decades-old, 200-person company had used to date, and paying for it.
Or they implemented the file-arrival SLA checker such that it only alerted when a late file arrived. So if a file never arrives, it never alerts. And when a daily file arrives a week late, you get the alert on arrival, not a week earlier when it became late.
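A correct checker has to alert on absence past the deadline, roughly like this sketch (the names are illustrative):

    from datetime import datetime, timedelta, timezone

    def check_sla(expected, arrived, sla=timedelta(hours=24)):
        # expected: {filename: tz-aware datetime it was due}; arrived: set of filenames seen.
        now = datetime.now(timezone.utc)
        for name, due in expected.items():
            if name not in arrived and now > due + sla:
                print(f"ALERT: {name} still missing, was due {due}")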
To be fair though, in your case it sounds like 51% (and maybe even 75+%) of the defect was in the specifications.
You can have a loose spec and trust the team to do the right thing if it's an internal team you will allocate budget/time to iterate. Not if you have a fixed time & cost contract.
Was it not possible to see the quality issues before the project was finished?
Eventually you will find yourself in deep waters, with the ship lower than it should be, routinely bailing out buckets of water, wishing for the nearest island, only to repair the ship with whatever is on that island and keep sailing to the next one, buckets at the ready.
After a couple of enterprise projects, one learns it is either move into another business, or learn to cope with this approach.
Which might be especially tricky given the job landscape in someone's region.
Before SE I had a bunch of vastly different jobs and they all suffered from something akin to crab bucket mentality where doing a good job was something you got away with.
I've had jobs where doing the right thing was something you kept to yourself or suffer for it.
I wish I could make $$$ off this insight somehow, but I'm not sure it's possible.
Source: I've been replaced by this process a number of times.
I don't see how this would be causally linked to capitalism in any meaningful way.
The contractors simply wanted to get paid, naturally. The people who paid them didn't understand the original codebase, and they did not communicate with the people who designed and built the original codebase either. The people who built the original code were overworked and saw the whole brouhaha as a burden over which they had no control.
It was a low seven figure contract. The feature was scrapped after two or three years while the original product lived on and evolved for many years after that.
I hope that management learned their lesson, but I doubt it.
That's how lots of the early outsourced projects ended up. Perfectly matching the spec and not working.
> The whole thing was scrapped immediately.
And that's how it ended up too. Everything old is new again.
That said, a lazy contribution - substandard code or poorly LLM generated - just wastes your time if your feedback is just put into the LLM again. Setting boundaries then is perfectly acceptable, but this isn't unique to LLMs.
You might make this easier by saying you just checked their code with your own AI system and then say it returned "you obviously didn't write it, please redo".
You give up, approve the trash PRs, wait for it to blow up in production and let the company reap the rewards of their AI-augmented workforce, all while quietly looking for a different job or career altogether.
What works for me is that after having lots of passing tests, I start refactoring the tests to get closer to property testing: basically proving that the code works by running it through complex scenarios and checking that the state is good at every step, instead of just testing lots of independent cases. The better the test is, the harder it is for LLMs to cheat.
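A minimal sketch of that direction using the `hypothesis` library (the function under test is made up):

    from hypothesis import given, strategies as st

    def dedupe(xs):  # made-up function under test
        return list(dict.fromkeys(xs))

    @given(st.lists(st.integers()))
    def test_dedupe_properties(xs):
        out = dedupe(xs)
        assert len(set(out)) == len(out)  # no duplicates remain
        assert set(out) == set(xs)        # no elements invented or lost
        # first occurrences keep their original relative order
        assert all(xs.index(a) < xs.index(b) for a, b in zip(out, out[1:]))

Instead of pinning individual input/output pairs (which an LLM can special-case its way past), you assert invariants that must hold for every generated input.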
We scoff at clever code that's hard to understand, leading to poor ability for teams to maintain it, but what about knowingly much-lower-quality code?
Much as IKEA's low-cost, replaceable furniture has replaced artisan, handmade furniture, and cheap plastic toys have replaced finely made artifacts, LLM-produced code is cheap and low-effort, meant to be discarded.
In recognizing this, then it should be used where you have this in mind. You might still buy a finely made sofa because it's high touch. But maybe the bookshelves from Ikea are fine.
I see this a lot and have even done so myself. I think a lot of people in the industry are a bit too socially aware and think that if they start a discussion they'll look like they're trying too hard.
It's stupid yes, but plenty of times I've started discussions only to be brushed off or not even replied to, and I believed it was because my responses were too long and nobody actually cared.
But then, for me, writing is a way to organize thought as well, plus these remarks will stay in the thread for future reference. In theory anyway, in practice it's likely they'll switch from Gitlab to something else and all comments will be lost forever.
Which makes me wish for systems that archive review remarks into Git somehow. I'm sure they exist, but they're not commonly used.
It is likely not possible to completely forbid junior developers from using AI tools, but any pull request they create that contains (AI-generated) code they don't fully comprehend (they can google things) will be rejected; to test this, simply ask them some non-trivial questions about the code. If they do it again, these junior developers deserve a (small) tantrum.
So we can ask everyone using these tools to understand the code before submitting a PR, but that's the best we can do. There's no need to call anyone out for not meeting some invisible standard of quality.
I didn't expect this initially but I am seeing it a ton at work now and it is infuriating. Some big change lands in my lap to review and it has a bunch of issues but they can ultimately be worked out. Then kaboom it is an entirely different change that I need to review from scratch. Usually the second review is just focused on the edits that fixed the comments from my first review. But now we have to start all over.
If LLMs are used to write the unit tests, this will get worse, because no time will be spent reflecting on "what do I need" or "how can this be simplified". These are, in my opinion, the differences between a Developer, Engineer, and Architect mindset. And LLMs / vibe coding will never produce actual engineers or architects, because they can never develop that mindset.
The easiest programming language to spot those architectural mistakes in is, coincidentally, the one with the least syntax burden. In Go it's pretty easy to discover these types of issues in reviews because you can check the integrated unit tests, which help a lot in narrowing down the complexities of code branches (and whether or not a branch was reached, for example).
In my opinion we need better testing/review methodologies. Fuzz testing, unit testing and integration testing isn't enough.
We need some kind of logical-inference tests which can prove that code branches are kept and called, and allow us to confirm satisfiability.
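The closest existing approximation I know of is branch coverage, e.g. via coverage.py (a sketch, not the inference tooling wished for above; the module under test is hypothetical):

    import coverage

    cov = coverage.Coverage(branch=True)  # track branch arcs, not just lines
    cov.start()
    import my_module           # hypothetical module under test
    my_module.run_scenarios()  # hypothetical entry point exercising the branches
    cov.stop()
    cov.report(show_missing=True)  # lists branches that were never taken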
It's funny, I have the same problem, but with subject matter expertise. I work with internal PR people and they have clearly shifted their writing to be AI-assisted or even AI-driven. Now I as the SME get these AI-written blog posts and press releases, and I spend far more time getting all the hallucinations out of these texts.
It's an effort inversion, too: time spent correcting the PR people's errors has tripled or quadrupled. They're supposed to assist me, not the other way around. I'm not the press-release writer here.
And of course they don't 'learn' like your junior engineers - it's always AI, it's always different hallucinations.
P.S.: And yes, I've raised this internally with our leadership: at this rate we'll have 50% of the PR people next year; they're making themselves unemployed. I don't need a middleman whose job it is to copy-paste my email into ChatGPT and then send me the output; I can do that myself.
Of course this is impossible to enforce, and I believe that the PR people would rather hide their AI usage. (As I wrote above why pay high salaries to people who automate themselves away?)
So then you see where this is going.
Edit: actually, that's the story of my life. I've been working for 20 years and every 5 years or so, stuff gets reshuffled so I have 3 more jobs instead of 1. It feels like I have 20 jobs by now, but still the same salary. And yes I've switched employers and even industries. I guess the key is to survive at the end of the funneling.
"""
This is the new adder feature. Internally it uses chained Adders to multiply:
Adder(Adder(Adder(x, y), y), ...)
"""
class Adder:
# public attributes x and y
def __init__(self, x: float, y: float) -> None:
raise NotImplementedError()
def add(self) -> float:
raise NotImplementedError()
class Muliplier:
# public attributes x and y
# should perform multiplication with repeated adders
def __init__(self, x: float, y: float) -> None:
raise NotImplementedError()
def multiply(self) -> float:
raise NotImplementedError()
This is a really dumb example (frankly, something Claude would write), but it illustrates that they should do this for external interfaces and for implementation details. For changes, you'd do the same thing: specify it as comments and "high level" code ("# remove this class and switch to Multiplier"), etc.
Then spec -> review -> tests -> review -> code -> review.
Depending on how much you trust a dev, you can kill some review steps.
1. It's harder to vibe good specs like this from the start, and prevents Claude from being magical (e.g. executing code to make sure things work)
2. You're embedding a design process into reviews which is useful even if they're coding by hand.
3. It simplifies reviewing generated code because at least the interfaces should be respected.
This is the pattern I've been using personally to wrangle ChatGPT and Claude's behavior into submission.
My gut feeling is that it would generalize to typed languages, Go, Erlang, even Haskell etc, but maybe some of them make life easier for the reviewer in some ways? What are your thoughts on that?
Would you mind drilling down into this a bit more? I might be dealing with a similar problem and would appreciate if you have any insight
The way you solve this is that you pull your junior into a call and work through your comments with them one by one, verbally, expecting them to comprehend the issues every time.
I had to think a bit about it, but when it feels off it can be something like:
- I wrote several paragraphs explaining my reasoning, expecting some follow-up questions.
- The "fix" didn't really address my concerns, making it seem like they just said "okay" without really trying to understand. (The times when the whole PR is replaced makes it seem like my review was also just forwarded to the LLM, haha)
- I'm also comparing to how I often (especially earlier in my career) thought a lot about how to solve things, and when I got constructive feedback it felt pretty rewarding - and I could often give my own reasoning for why I did things a certain way. Sometimes I had tried a bunch of the things that the reviewer suggested, leading to a more lively back-and-forth. This could just be me, of course, or a cultural thing, but my expectation also comes from how other developers I've worked with react to my reviews.
Does that make sense? I'd be interested in hearing more about the problem you're dealing with. If this is not the right place, feel free to send an email :)
This kind of thing drove me mad even before LLMs or coding - it started at school when I helped people with homework. People would insist on switching to an entirely different approach midway through explaining how to fix the first one.
My favorite LLM-generated code I've seen in PRs lately is
expect(true).toBe(true)
Look ma! Tests aren't flaky anymore!

One thing I do that helps clean things up before I send a PR is writing a summary. You might consider encouraging your peers to do the same.
## What Changed?

Functional Changes:
- New service for importing data
- New async job for dealing with z.

Non-functional Changes:
- Refactoring of Class X
- Removal of outdated code
It might not seem like much, but writing this summary forces you to read through all the changes and reflect. You often catch outdated comments, dead functions left after extractions, or other things that can be improved, before asking a colleague to review it. It also makes the reviewer's life easier, because even before they look at the code, they already know what to expect.
PRs in general shouldn't require elaborate summaries. That's what commit messages are for. If the PR includes many commits where a summary might help, then that might be a sign that there should be multiple PRs.
In other words, we need to code review the same way we interact with LLMs - point to the overarching flaw and request a reroll.
Another thing I do is ask for the Claude session log file. The inputs and thoughts they provided to Claude give me a lot more insight than Claude's output. Quite often I am able to correct the thought process when I know how they are thinking. I've found junior developers treat Claude like SMS: small, ambiguous messages with very little context, hoping it will perform magic. By reviewing the Claude session file, I try to fix this superficial prompting behaviour.
And third, I've realized Claude works best if the code itself is structured well and has tests, tools to debug, and documentation. So I spend more time on tooling so that Claude can use these tools to investigate issues, write tests, and iterate faster.
Still a long way to go, but this seems promising right now.
For junior devs, it’s about the same, I’m assigning hack jobs, because most of what we need to do are hack jobs. The code really isn’t the bottleneck in that case, the research needed to write the code is.
Who thought lazy devs were the bottleneck? The industry needs 8x as much regulation as it has now; they can do whatever they want at the moment lol.
Otherwise, I'm writing embedded systems. Fine, LLM, you hold the scope probe and figure out why that PWM is glitching.
The people who have to read your self-review will simply throw what you gave them into their own instance of the same corporate AI,
at which point why not simply let the corporate AI tell you what to do as your complete job description; the AI will tell you to "please hold the scope probe as chatbotAI branding-opportunity fixes the glitches in the PWM"
I guess we pass the butter now...
They're not wrong, but they're missing the point. These bottlenecks can be reduced when there are fewer humans involved.
Somewhat cynically:
- code reviews: now sometimes there's just one person involved (reviewing LLM code) instead of two (code author + reviewer)
- knowledge transfer: fewer people involved means this is less of an overhead
- debugging: no change, yet
- coordination and communication: fewer people means less overhead
- LLMs shift the workload, they don't remove it: sure, but shifting workload onto automation reduces the people involved
- understanding code is still the hard part: not much change, yet
- teams still rely on trust and shared context: much easier when there are fewer people involved
... and so on.
"Fewer humans involved" remains a high priority goal for a lot of employers. You can never forget that.
Most of these only exist because one person cannot code fast enough to produce all the code. If one programmer were fast enough, you would not need a team, and then you wouldn't have coordination and communication overhead, and so on.
If the amount of code grows without bounds and is an incoherent mess, team sizes may not, in fact, actually get smaller.
One useful dimension along which to consider team organization is the "lone genius" to "infinite monkeys on typewriters" axis. Agile as usually practised, microservices, and other recent techniques seem to me to be addressing the "monkeys on typewriters" end of the spectrum. Smalltalk and Common Lisp were built around the idea of putting amazing tools in the hands of a single dev or a small group of devs. There are still things that address this group (e.g. it's part of the Rails philosophy), but it is less prominent.
I have watched employers try to solve and cheat their way around this low confidence for almost 20 years. The result is always the same: some shitty form of pattern copy/paste, missing originality, and long delivery timelines for really basic features. The reason for this is that nobody wants to invest in training/baselines, plus a great fear that if they do have someone perceived as talent, that person is irreplaceable and can leave.
My current job in enterprise API management is the first time where the bottleneck is different. Clearly the bottleneck is the customer’s low confidence, as opposed to the developers, and manifests as a very slow requirements gathering process.
But I think that makes them invaluable in professional contexts. There is so much tooling we never have the time to write to improve stuff. Spend 1-2h with Claude code and you can have an admin dashboard, or some automation for something that was done manually before.
A coworker comes to me with a question about our DB content, Claude gives me a SQL query for what they need, I review it, copy-paste it into Metabase or Retool, and they no longer have to be blocked by engineering. That type of thing has been my motivation for mcp-front[0]: I wanted my non-eng coworkers to be able to do that whole loop by themselves.
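For anyone sketching that loop themselves, the key guardrail can be as small as a read-only connection. This is a minimal sketch assuming SQLite, with a made-up database and table; it is not how mcp-front works:

    import sqlite3

    def run_readonly_query(db_path: str, sql: str) -> list[tuple]:
        # Opening the database in read-only mode (mode=ro) means a pasted,
        # LLM-drafted query can't mutate anything, which makes the
        # "review, then hand off" loop safer.
        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
        try:
            return conn.execute(sql).fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        # Hypothetical example of the kind of query an LLM might draft:
        rows = run_readonly_query(
            "app.db",
            "SELECT status, COUNT(*) FROM orders GROUP BY status",
        )
        for row in rows:
            print(row)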
we spin up a data lake, load all your data and educate an agent on your data.
But getting it to spit out hundreds or even thousands of lines of code and then just happy path testing and shipping is insane.
I'm really concerned about software quality heading into the future.
Except... writing code is often a bottleneck. Yeah, code reviews, understanding the domain, etc, is also a bottleneck. But Cursor lets me write apps and tools in 1/20th the time it would take me in an area where I am an expert. It very much has removed my biggest bottleneck.
Once you can specify what to create, and do it well, then actually creating it is quite cheap.
However, as a software developer who often feels pulled into 10 hours of meetings to argue the benefits of one 2-hour thing over another 2-hour thing, my view is often "Let's do both and see which one comes out best." The view of less technical participants in meetings is always that development is expensive, so we must at all costs avoid developing the wrong thing.
AI can really take that equation to the extreme. You can make ten different crappy, non-working proof-of-concept things very cheaply, then throw them out and manually write (or adapt) the final solution just like you always did. But the hard part wasn't writing the code; it was the meeting where it was decided how it should work. Still, just like discussing a visual design is helped by having sketches, I think "more code" isn't necessarily bad. AIs produce subpar code very quickly, and there are good uses for that: it's a sketch tool for code.
The problem is that the business bleepheads see the thing work (badly) and just say "looks great as is, let's ship it", and now you're saddled with that crap code forever.
As someone who shamefully falls more in the hobbyist camp, even when they code in the workplace, and has always wanted to cross what I perceived as a chasm, I’m curious, where did most people who code for a living learn these skills?
Great teams do take that in account and will train newcomers in what it means to be a “professional” developer. But then the question becomes, how do you find such a team? And I don’t think there is a trick here. You have to look around, follow people who seem great, try to join teams and see how it goes
Practice clearly and concisely expressing what you understand the problem to be. This could be a problem with some code, some missing knowledge, or a bad process.
Check to see whether everyone understands and agrees. If not, try to target the root of the misunderstanding and try again. Sometimes you'll need to write a short document to make things clear. Once there is a shared understanding, people can start talking about solutions. Once everyone agrees on a solution, someone can go implement it.
Like any skill, if you practice this loop often enough and take time to reflect on what worked and what didn’t, you slowly find that you develop a facility for it.
If you look through the lens of BigTech and corporations, yes, code was not a bottleneck.
But if you look from the perspective of startups, rigorous planning was the norm because the resources to produce features were limited, which means producing working code was the bottleneck. Small teams don't have coordination overhead; the idea and vision are clear to them, so they just need to produce something they have already discussed and agreed on.
My takeaway: when discussing broad topics like the usefulness of AI/LLMs, don't generalize your assumptions. Code was the bottleneck for some, not for others.
Sure, LLMs might create slop on novel problems, but for a non-tech company that needs to create a new CRUD route and an accompanying form, LLMs are smart enough.
And there's no good reason why LLMs can't at least partially help with that.
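For a sense of scale, the kind of route in question really is this small. A minimal Flask-style sketch with made-up entity names (update and delete would follow the same pattern):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    # In-memory stand-in for a real table; purely for illustration.
    customers: dict[int, dict] = {}
    next_id = 1

    @app.post("/customers")
    def create_customer():
        global next_id
        data = request.get_json()
        customers[next_id] = {"id": next_id, "name": data["name"]}
        next_id += 1
        return jsonify(customers[next_id - 1]), 201

    @app.get("/customers/<int:customer_id>")
    def read_customer(customer_id: int):
        customer = customers.get(customer_id)
        return (jsonify(customer), 200) if customer else ("Not found", 404)

This is exactly the rote, easy-to-predict shape of code that LLMs reproduce reliably.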
Autocomplete speeds up code generation by an order of magnitude, easily, with no real downside when used by experienced devs. Vibe coding on the other hand completely replaces the programmer and causes lots of new issues.
Strongly disagree. Autocomplete thinks slower than I do, so if I want to try and take advantage of it I have to slow myself down a bunch
Instead of just writing a function, I write a line or two, wait to see what the auto complete suggests, read it, understand it, often realize it is wrong and then keep typing. Then it suggests something else, rinse, repeat
I get negative value from it and turned it off eventually. At least intellisense gives instant suggestions ...
I recently started working on a client's project where we were planning on hiring a designer to build the front-end UI. Turns out, Gemini can generate really good UIs. Now we're saving a lot of time because I don't have to wait on the designer to provide designs before I can start building. The cost savings are most welcome as well.
Coding is definitely a bottleneck because my client still needs my help to write code. In the future, non-programmers should be able to build products on their own.
I don’t think there’s enough distinction between using LLMs for frontend and backend in discussions similar to these.
Using it for things like CSS/Tailwind/UI widget layout seems like a low risk timesaver.
If I have to manually review the boilerplate after it's generated, then I may as well just write it myself. AI is not improving this unless you just blindly trust it without review, AND YOU SHOULDN'T.
If there's a secret, silent majority of seasoned devs who are just quietly trying to weather this, I wish they would speak up
But I guess just getting those paycheques is too comfy
IMO user expectations are now so high that you need to create websites, apps, auth, payment integration, customer support forums, and chats, and all of this just to break the ice and give the business a good footing to move forward. You can see how this is a problem for a non-technical person: nobody will hire someone to do all that, as it would be prohibitively expensive. AI is not for the engineers; it is "good enough" for folks that do not understand the code.
A lot depends on where the money will be invested, and what will consumers like as well. I bet the current wave of ai coding will morph into other spheres to try and improve efficiency.
There are two ways forward:
- Those of us that have been vibing revert to having LLMs generate code in small bits that our brains are fast enough to analyze and process, but LLMs are increasingly optimized to create code that is better and better, making it seem like this is a poor use of time, since “LLMs will just rewrite it in a few months.”
- We just have a hell of a time, in a bad way, some of us losing our jobs, because the code looks well-thought out but wasn’t, at ever increasing scale.
I have wavered over the past months in my attitude after having used it to much success in some cases and having gotten in over my head in beautiful crap in the more important ones.
I have (too) many years of experience, and have existed on a combination of good enough, clear enough code with consideration for the future along with a decent level of understanding, trust in people, and distrust in scenarios.
But this situation is flogging what remains of me. Developers are being mentored by something that cannot mentor and yet it does, and there is no need for me, not in a way that matters to them.
I believe that I’ll be fired, and when I am, I may take one or both of two roads:
1. I’ll continue to use LLMs on my own hoping that something will be created that feeds my family and pays the bills, eventually taking another job where I get fired again, because my mind isn’t what it was.
2. I do one of the few manual labor jobs that require no reasoning and are accepting of a slow and unreliable neurodivergent, if there are any; I don’t think there truly are.
I’ve been close to #2 before. I learned that almost everything that is dear to you relies on your functioning a certain way. I believe that I can depend on God to be there for me, but beyond that, I know that it’s on me. I’m responsible for what I can do.
LLMs and those AIs that come after them to do the same- they can’t fill the hole in others’ lives the way that you can, even if you’re a piece of shit like I am.
So, maybe LLMs write puzzling code as they puzzle out our inane desires and needs. Maybe we lose our jobs. Maybe we hobble along slowly creating decent code. It doesn’t matter. What matters is that you be you and be your best, and support others.
I think one very important aspect is requirements collection and definition. This includes communicating with the business users, trying to understand their issues and needs that the software is supposed to address. And validating if the actual software is actually solving it or not (sufficiently).
All of this requires human domain knowledge, communication and coordination skills.
This narrative is not new. Many times I've seen decisions made on the basis of "does it require writing any code or not". But I agree with the sentiment: the problem is not the code itself but the cost of ownership of that code (how it is tested, where it is deployed, how it is monitored, by whom it's maintained, etc.).
If you can use tools to increase individual developer productivity (let's say, all else being equal, code output 2x as fast) in a way that lets you cut the team size in half, you'll likely see a significant productivity benefit, since your communication overhead has gone down in the process.
This is of course assuming a frictionless ideal gas at STP where the tool you're looking at is a straight force multiplier.
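The math behind that intuition is the old Brooks observation: pairwise communication paths in a fully connected team grow as n(n-1)/2, so halving headcount cuts coordination overhead by much more than half. A quick illustration:

    def channels(n: int) -> int:
        # Pairwise communication paths in a fully connected team
        # (Brooks, The Mythical Man-Month): n * (n - 1) / 2.
        return n * (n - 1) // 2

    for size in (8, 4):
        print(f"team of {size}: {channels(size)} communication paths")
    # team of 8: 28 paths; team of 4: 6 paths. Halving the team cuts
    # coordination paths by almost 5x, so a 2x-per-developer tool can
    # net more than 2x overall.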
And, cynically, I bet a software LLM will be more responsive to your feedback than the over-educated and overpaid junior “engineer” will be. Actually I take it back, I don’t think this take is cynical at all.
I see it as a sign of how bad juniors are, and the need of seniors interacting with LLM directly without the middlemen.
A human’s ability to assess, interrogate, compare, research, and develop intuition are all skills that are entirely independent of the coding tool. Those skills are developed through project work, delivering meaningful stuff to someone who cares enough to use it and give feedback (eg customers), making things go whoosh in production, etc etc.
This is an XY problem, and the real Y is galaxy brains submitting unvalidated and shoddy work that makes good outcomes harder rather than easier to reach.
Junior devs are responding to incentives to learn how to use LLMs, which we are saying all coders need to do.
So now we have to torture the argument to create a carve out for junior devs - THEY need to learn critical thinking and taking responsibility.
Using an LLM directly reduces your understanding of whatever you used it to write, so you can't have both: learning how to code and making sure your skills are future-proof.
There’s no carve out. Anyone pushing thoughtless junk in a PR for someone else to review is eschewing responsibility.
If programmers spend 90%+ of their time reading code rather than writing it, then LLM-generated code is optimizing only a small amount of the total work of programming. That seems to be similar to the point this blog is making.
[1] https://www.goodreads.com/quotes/835238-indeed-the-ratio-of-...
Context is never close at hand, it is scattered all over the place defeating the purpose.
People used to resist reading machine generated output. Look at the code generator / source code / compiler, not at the machine code / tables / xml it produces.
That resistance hasn't gone anywhere. No one wants to read 20k lines of generated C++ nonsense that gcc begrudgingly accepted, so they won't read it. Excitingly, the code generator is no longer deterministic, and the "source code" prompt isn't written down, so what we've really got is rapidly growing piles of ASCII-encoded binaries accumulating in source control. Until we give up on git, anyway.
It's a decently exciting time to be in software.
Using Claude Code to first write specs, then break it down into cards, build glossaries, design blueprints, and finally write code, is just a perfect fit for someone like me.
I know the fundamentals of programming, but since 1978 I've written in so many languages that the syntax now gets in the way. I just want to write code that does what I want, and LLMs are beyond amazing at that.
I'm building APIs and implementing things I'd never have dreamed of spending time on learning, and I can focus on what I really want: design, optimisation, simplification, and outcomes.
LLMs are amazing for me.
Often giving 90% of what you need.
But those junior devs…
I am the pointy end of the spear.
How are you supposed to understand code if you don't at least read it and fail a bit?
I'll continue using Sonnet4 for frontend personally; it has always been a pain point in the team, and I ended up being the most knowledgeable on it. Unless it's a new code architecture, I will understand what was changed and why, so I'm confident I can handle rapid iteration of code on it, but my coworkers who already struggled with our design will probably struggle even more.
Sadly, I think in the end our code will be worse, but we are a team of 5 doing the work of a team of 8, so any help is welcome. (We used to do the work of 15, but our 10x developer sadly (for us) got caught being his excellent self by the CTO and now handles a new project. Hopefully with executive-level pay.)
“Where is the customer entity saved to the database?”
Coordination, communication, etc. are honestly not that big of a deal if you have the right people. If you are working with the wrong people, coordination and communication will never be great no matter what "coordination tools" you bring in. If you have a good team, we could be sending messenger pigeons for all I care and things would still work out.
Just my opinions.
It's something that takes time. That time is now greatly reduced. So you can try more ideas and explore problems by trying solutions quickly instead of just talking about them.
Let's also not ignore the other side of this. The need for shared understanding, knowledge transfer, etc. is close to zero if your team is agents and your "code" is the input context (with the actual code at the level machine code occupies today: very rarely, if ever, looked at). That's kinda where we're heading. Software is about to get much grander, and your team is individuals working on loosely connected parts of the product. Potentially hundreds of them.
We have the technology to make the technology not suck. The real challenge is putting that developer ego into a box and digging into what drives the product's value from the customer's perspective. Yes - we know you can make the fancy javascript interaction work. But, does the customer give a single shit? Will they pay more money for this? Do we even need a web interface? Allowing developers to create cat toys to entertain themselves with is one realistic way to approach the daily cloud spend figures of Figma.
The biggest tragedy to me was learning that even an aggressive incentive model does not solve this problem. Throwing equity and gigantic salaries into the mix only seems to further complicate things. Doing software well requires at least one person who just wants to do it right regardless of specific compensation. Someone who is willing to be on all of the sales & support calls and otherwise make themselves a servant to the customer base.
Yup. The tough part of my job has always been taking the business requirements and then figuring out what the business ACTUALLY wants. Users will tell you what they want, but users are not designers and usually don't think past what they currently want right now. Give them exactly what they say they want and it will almost never give a good result. You have to navigate consequences of decisions and level-set to find the solution.
LLMs are not good at this and only seem to get worse as "improved" models find users prefer constant yes-manning. I've never had an LLM tell me my idea was flawed and that's a huge issue when writing software.
Code is not a bottleneck. Specs are. How the software is supposed to work, down to minuscule detail. Not code, not unit tests, not integration tests. Just plain English, diagrams, user stories.
Bottleneck is designing those specs and then iterating them with end users, listening to feedback, then going back and figuring out if spec could be improved (or even should). Implementing actual improvements isn't hard once you have specs.
If specs are really good -- then any sufficiently good LLM/agent should be able to one-shot the solution, all the unit tests, and all the integration tests. If it's too large to one-shot -- product specs should never be a single markdown file. Think of it more like a wiki -- with links and references. And all you have to do is implement it feature by feature.
So... coding. :P
Or I might build that myself.
I'm not sure about that: the code LLMs generate isn't categorically worse than code written by people who no longer work here and whom I can't ask anything either. It's also not much better or worse than what you'd find online, but it has broader reach than my Google-fu, alongside some hallucinations. At the same time, AI doesn't hate writing tests, because it doesn't get a choice. It doesn't take breaks and doesn't half-ass things any more or less depending on how close to 5 PM it is.
Maybe my starting point is viewing all code as a liability and not trusting anything anyone (myself included) has written all that much, so the point doesn’t resonate with me that much. That said I have used AI to push out codebases that work, albeit that did take a testable domain and a lot of iteration.
It produces results but also rots my brain somewhat because the actual part of writing code becomes less of a mentally stimulating activity compared to requirements engineering.
It’s decisions.
Ninety-five percent is all the decisions made by every person involved.
The fastest delivery I ever encountered were situations where the stakeholders were intimately familiar with the problem and made quick decisions. Only then did speed of coding affect delivery.
In large organizations, PMs are rarely POs. Every decision needs to be run up the flagpole and through committee with CYAs and delays at every step.
Decision makers are outsourcing this to LLMs now which is scary as they are supposed to be the SME.
It's the same old story where the generals make decisions but the sergeants (NCOs) really run the army. That's where I feel leads/principals/staff really make or break the product. They are the fulcrum, dealing with LLMs from above and below.
Multimodal LLMs take it even further. I've given Claude 4 a screenshot and simply said, “There’s too much white space.” It correctly identified the issue and generated CSS fixes. That kind of feedback loop could easily become a regression test for visual/UI consistency.
This isn’t just about automating code generation—it’s about augmenting the entire development cycle, from specs to testing to visual QA.
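A rough sketch of what such a visual regression check could look like, assuming Playwright plus Pillow and a locally running app (the URL and file names are made up):

    from pathlib import Path

    from PIL import Image, ImageChops
    from playwright.sync_api import sync_playwright

    def screenshot(url: str, out: Path) -> None:
        # Capture a fixed-viewport screenshot so comparisons are stable.
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page(viewport={"width": 1280, "height": 800})
            page.goto(url)
            page.screenshot(path=str(out), full_page=True)
            browser.close()

    def differs(baseline: Path, candidate: Path) -> bool:
        # getbbox() returns None when the two images are pixel-identical.
        diff = ImageChops.difference(Image.open(baseline), Image.open(candidate))
        return diff.getbbox() is not None

    if __name__ == "__main__":
        screenshot("http://localhost:3000", Path("current.png"))
        if differs(Path("baseline.png"), Path("current.png")):
            raise SystemExit("UI changed; review the diff before shipping")

In practice you would hand the failing screenshot pair to the multimodal model and ask it to describe the regression, closing the loop the comment describes.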
And now instead of having to get the help or code from an actual programmer, as a non-programmer but technical person, I can generate or alter any small trivial applications I want. I'm not going to be writing an OS or doing "engineering" but if I want to write a GUI widget to display my PCs temps/etc, or alter a massive complex C++ program to have some feature I want (like adding checkpointing to llama.cpp's fine-tune training), suddenly it's trivial and takes 15 minutes. Before it'd take days if it were feasible without help at all.
I can relate :-)
Our team maintains a configuration management database for a company that has grown mostly organically from 3 to 500+ employees in ~30 years.
We don't have documented processes that would account for most of the write operations, so if we have a question, we cannot just talk to the process owner.
The next option would be to talk to the data owner, but for many of our entities, we don't have a data owner. So we look into the audit logs to see which teams often touch the data, and then we do a meeting with some senior folks from each of these teams to discuss things.
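The audit-log triage step itself can be a few lines. A sketch assuming a flat CSV export with entity/operation/team columns (a real audit log would need its own parsing):

    import csv
    from collections import Counter

    def top_writing_teams(audit_log_csv: str, entity: str, n: int = 5):
        # Count write operations per team for one entity type, so we know
        # which teams to invite to the meeting.
        writes = Counter()
        with open(audit_log_csv, newline="") as f:
            for row in csv.DictReader(f):
                if row["entity"] == entity and row["operation"] in ("create", "update", "delete"):
                    writes[row["team"]] += 1
        return writes.most_common(n)

    print(top_writing_teams("audit_export.csv", "network_device"))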
But of course, finding common meeting time slots with several senior people from several teams isn't easy, they're all busy. So that alone might delay something by a few weeks to months.
For low-stakes decisions, we often try to not go through this effort, but instead do things that are easy to roll back if they go wrong.
Once we have identified the stakeholders, have a common understanding among them, and a rough consensus on how to proceed, the actual code changes are often relatively simple in comparison.
So, I guess this falls under "overhead of coordination and communication".
My backspace and delete keys loom larger than the rest of my keyboard combined. Plodding through with meager fingers, I could always find fault faster than I could produce functionality. It was a constant, ego-depleting struggle to set aside encountered misgivings for the sake of maintaining forward progress on some feature or ticket.
Now, given defined goals and architectural vision, which I've never been short of, the activation energy for producing large chunks of 'good enough' code to move projects forward is almost zero. Even my own oversized backspace is no match for the torrent.
Again - personal variation - but I expect that I am easily 10x in both in ambition and in execution compared to a year ago.
That's now melted away. For the first time my mind feels free to think. Everything is moving almost as fast as I am thinking; I'm far less bogged down in the slow parts, which the LLM can do. I spend so much more time thinking, designing, architecting, etc. Distracting thoughts now turn into completed quality-of-life features, done on the side. I guess it rewards ADD, the real kind, in a way that the regular world understandably punishes.
And it must free up mental space for me, because I find I can now review others' PRs more quickly as well. I don't use an LLM for this, and don't have a great explanation for what is happening here.
Anyway, I'm not sure this is the same issue as yours, so it's interesting to think about what kinds of minds it's freeing, and what kinds it's of less use to.
An LLM is an extremely useful search engine. Given access to plenty of CLI tools asking it questions about an unfamiliar code base is extremely helpful. It can read and summarize much faster than I can. I don't trust its exact understanding but I do trust it to give me a high level understanding of an architecture, call out dependencies, summarize APIs, and give me an idea of what parts of the code base are new/old etc.
Additionally, having LLMs write tests after setting up the rough testing structure and giving it a few examples massively decreases the amount of time it takes to prove my understanding of the code through the tests thereby increasing my confidence/trust in it.
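Concretely, the "rough structure plus a few examples" can be a single parameterized test that the LLM then extends; the module and function names below are hypothetical:

    import pytest

    from pricing import apply_discount  # hypothetical module under test

    # One worked example of the structure; the LLM is asked to extend the
    # parameter table with edge cases (zero, negative, rounding) in this style.
    @pytest.mark.parametrize(
        ("price", "percent", "expected"),
        [
            (100.0, 10, 90.0),
            (80.0, 0, 80.0),
        ],
    )
    def test_apply_discount(price, percent, expected):
        assert apply_discount(price, percent) == pytest.approx(expected)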
Authoring has never been the bottleneck, the same way my typing speed has never been the bottleneck.
The bottleneck has been, and continues to be, code review. It was in our pitch deck 4 years ago; it's still there.
For most companies, by default, it's a process that's synchronously blocked on another human. We need to either make it async (stacking) or automate it (better, more intelligent CI), or, ideally, both.
The tools we have are outdated, and if you're a team with more than 50 engineers you've already spun up a sub-team (devx, dev velocity, or dev productivity) whose job is to address this. Despite that, industry-wide, we've still done very little, because it's a philosophically poorly understood part of the process. (Why do we do code review? Seriously: in three bullet points, what's the purpose? Most developers realize they haven't thought that deeply here.)
- functionality: does it work? And is it meeting reqs?
- bug prevention: reliability, not breaking things
- matching the system architecture and best practices for the codebase
Other ideas:
- style and readability
- learning, for the junior and less so the senior, probably
- checking the “code review” box off your list
1. Collaborate asynchronously on architectural approach: (simplify, avoid wheel reinvention)
2. Ask "why" questions, document answers in commits and/or comments to increase understanding
3. Share knowledge
4. Bonus: find issues/errors
There are other benefits, like building rapport, getting some recognition for especially great code.
To me code reviews are supposed to be a calm process that takes time, not a hurdle to quickly kick out of the way. Many disagree with me however, but I'm not sure what the alternative is.
Edit: people tend to say reviews are for "bug finding" and "verifying requirements". I think that's at best a bonus side effect, that's too much to ask a person merely reading the code. In my case, code reviews don't go beyond reading the code (albeit deeply, carefully). We do however have QA that is more suited for verifying overall functionality.
This really gets at the benefits you mention and keeps people aligned with them instead of feeling like code review should be rushed.
Also hi Peter! Long time :)
1. Write a problem statement/requirements and get feedback.
2. Propose a solution to the problem and get feedback.
3. Design the data model(s) and get feedback.
4. Design the system architecture and get feedback.
5. Design the software architecture and get feedback.
6. Write some code and get feedback.
7. Test the code.
8. Let people use the code.
Writing the code is only one step.
In all honesty, I expect over time intelligent agents will be used for the other steps.
But the code is based on the proposed solution, which is based on the problem statement/requirements. The usefulness of the code will only be as good as the solution, which will only be as good as the problem statement/requirements.
An example of this is making changes to a self-hosting compiler. Due to something you don't understand, something is mistranslated. That mistranslation is silent though. It causes the compiler to mistranslate itself. That mistranslated compiler mistranslates something else in a different way, unrelated to the initial mistranslation. Not just any something else is mistranslated, but some rarely occurring something else. Your change is almost right: it does the right thing with numerous examples, some of them complicated. Making your change in the 100% correct way which doesn't cause this problem is like a puzzle to work out.
LLM AI is absolutely worthless in this type of situation because it's not something you can wing from the training data. It's not a verbal problem of token manipulation. Sure, if you already know how to code this correctly, then you can talk the LLM through it, but it could well be less effort just to do the typing.
However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.
I think the premise is true that writing code was never the "main" bottleneck but like any power tool, when wielded by the right person, it can blow past bottlenecks.
Many of these arguments only assume the case of an inexperienced engineer blindly pumping out and merging code. I concede the problems in that case.
But put this to the test with more experienced engineers: how does it change their workflows? The results (I've personally observed) are dramatically different.
---
> LLMs reduce the time it takes to produce code, but they haven’t changed the amount of effort required to reason about behavior, identify subtle bugs, or ensure long-term maintainability.
I have to strongly disagree here. This argument doesn't apply universally. I've actually found LLMs make it easier to understand large swaths of code, faster, especially in larger codebases with legacy code that no one has worked on or dared to touch. LLMs bring an element of fearlessness, which makes it easier to effect change.
If you have written about the workflow behind this outcome, I'd appreciate it if you shared it.
And FWIW, I'm also not alone in this observation: I can remember at least two times in the last month when other colleagues cited this exact same benefit.
E.g., a complicated algo that someone wrote 3 years ago that's working well enough but has always had subtle bugs. Over a 2-day workshop, we start by writing a bunch of (meaningful) tests with an LLM, then ask the LLM about portions of the code, piecing together why a certain bit of logic existed or was written a certain way, add more tests to confirm working behavior, then start refactoring and changing the algo (also with an LLM).
Much of this is similar to how we'd do it without LLMs, but no one had bothered to improve or change it because the time investment and ROI didn't make sense (let alone the cognitive burden of gathering context from git logs or from old-timers who hold nuggets of context that could be pieced together). With LLMs, a lot of that friction can be reduced.
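The "write meaningful tests first" step in that workshop is basically characterization testing. A rough sketch of the golden-file version (the module and function names are hypothetical):

    import json
    from pathlib import Path

    from legacy_algo import schedule  # hypothetical function being refactored

    GOLDEN = Path("tests/golden_schedule.json")

    def record_golden(inputs: list) -> None:
        # Run once against the old implementation to pin down current behavior,
        # subtle bugs included, before any refactoring starts.
        GOLDEN.write_text(json.dumps([[x, schedule(x)] for x in inputs]))

    def test_matches_golden():
        # After refactoring, the new implementation must reproduce the pinned
        # outputs exactly; intentional behavior changes update the golden file.
        for x, expected in json.loads(GOLDEN.read_text()):
            assert schedule(x) == expected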
This is what I’m working on fixing at wispbit.com
There will be a hundred justifications for and against it, but in the end you are going to need junior devs.
If said junior dev has not done the work, and an LLM has helped them, you are going to lose your hair walking through the code - every single time.
So you will choose between doing the work yourself, hiring new devs, or making the environment you do your work in become predictable.
We can argue that LLMs are massive piracy monstrosities, with huge amounts of public code in them, or that they are interfering with the ability of people to learn the culture of the company. The argument doesn't matter, because the reasoning being done here is motivated reasoning.
You will not care how LLMs are kept out of the playpen, you will just care that they are out.
So incentives will be structured to ensure that is the case.
What will really be game, set, and match is when some massive disaster strikes because of bad code, and it can be linked, directly or tangentially, to LLMs.
Examples:
This morning Claude Code built a browser-based app that visualizes 3.8M lines of JSON dumps of AWS infrastructure. Attention required by me: 15 minutes. Results: Reasonable for a 1-shot.
A few weeks ago I had it build me a client/server app in multiple flavors from a well defined spec: async, threaded, and select, to see which one was the most clear and easy to maintain.
A few days ago I gave it a 2K line python CLI tool and said "Build me a web interface to this CLI program". It nearly one-shotted it (probably would have if I had the playwright MCP configured).
These are all things I never would have been able to pursue without the LLM tooling in the past because I just don't have time to write the code.
There are definitely cases where the code is not the bottleneck, but those aren't the only cases.
That's the only simplification that makes sense and accounts for the different phenomena we see (solo developers doing amazing things exist, teams doing amazing things exist, amazing teachers exist, etc).
There are many ways of doing it. If you understand the problem and see a big ball of unnecessary complexity rising, you get upset.
I'd argue that they're slowly changing that as well -- you can ask an LLM to "read" code, summarize / review / criticize it. At the least, it can help accelerate onboarding onto new / unfamiliar codebases.
Introducing a lever to suddenly produce more code faster creates an imbalance in the SDLC. If our review process was already a bottleneck, now that problem is even worse! If the review bottleneck was something we could tolerate/ignore before, that's no longer the case, we need to solve for it. No, that doesn't mean let some LLM review the code and ship it. CI/CD needs to get better and smarter. As a reviewer, I don't want to be on the lookout for obscure edge cases. I want to make sure my peer solved the problem in a way that makes sense for our team. CI/CD should take care of making sure the code style aligns with our policies, that new/updated tests provide enough coverage for the new/changed functionality, and that the feature actually works.
The code expertise / shared context is another tough problem that needs solving, only highlighted by introducing a random graph of numbers generating the code. Leaning on that one engineer who has been on the team for 30 years and knows where all the deep dark secrets are was not a sustainable path even before coding agents. Having a markdown file that just says "component foo is under /foo. Run make foo to test it" was not documentation. The imbalance in the SDLC will light the fire under our collective asses to provide proper developer documentation and tooling for our codebases. I don't know what that looks like yet. Some teams are trying to have *good* markdown files that actually document where all the deep dark secrets are. These are doubly beneficial because coding agents can use them as well as your humans. But better markdown is probably just a small step towards the real fix, which we won't be able to live without in the near future.
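As a sketch of what "smarter CI" could start with, here's a minimal pre-merge gate; the tool choices (ruff, pytest-cov) and the 80% threshold are assumptions, not a prescription:

    """Sketch of a pre-merge gate: style and coverage checks that need no human."""
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],                            # style/lint policy
        ["pytest", "--cov", "--cov-fail-under=80", "-q"],  # tests + coverage floor
    ]

    def main() -> int:
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"gate failed: {' '.join(cmd)}", file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Everything this script catches is something a human reviewer no longer has to spend attention on, which is exactly the rebalancing the comment argues for.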
Anyway, great points brought up in the article. Coding agents aren't going away, so we need to solve this imbalance in the SDLC. Fight fire with fire!
While...
> Writing Code Was Never the Bottleneck
...it was also never the job that needed to get done. We wanted to put well working functionality in the hands of users, in an extendible way (so we could add more features later without too much hassle).
If lines of code were the metric of success (like "deal value" is for sales) we would incentivize developers for lines of code written.
I think the author agrees, and is arguing that LLMs don't help with that.