frontpage.

Introducing tmux-rs

https://richardscollin.github.io/tmux-rs/
142•Jtsummers•1h ago•43 comments

Poor Man's Back End-as-a-Service (BaaS), Similar to Firebase/Supabase/Pocketbase

https://github.com/zserge/pennybase
32•dcu•1h ago•14 comments

AI for Scientific Search

https://arxiv.org/abs/2507.01903
11•omarsar•1h ago•0 comments

Locality of Behaviour (2020)

https://htmx.org/essays/locality-of-behaviour/
70•jstanley•2h ago•35 comments

Tools: Code Is All You Need

https://lucumr.pocoo.org/2025/7/3/tools/
154•Bogdanp•5h ago•108 comments

Spending Too Much Money on a Coding Agent

https://allenpike.com/2025/coding-agents
57•GavinAnderegg•2d ago•43 comments

Parallelizing SHA256 Calculation on FPGA

https://www.controlpaths.com/2025/06/29/parallelizing_sha256-calculation-fpga/
13•hasheddan•1h ago•3 comments

Copper is Faster than Fiber (2017) [pdf]

https://www.arista.com/assets/data/pdf/Copper-Faster-Than-Fiber-Brief.pdf
20•tanelpoder•2d ago•5 comments

Alice's Adventures in a Differentiable Wonderland

https://arxiv.org/abs/2404.17625
79•henning•2d ago•10 comments

Peasant Railgun

https://knightsdigest.com/what-exactly-is-the-peasant-railgun-in-dd-5e/
66•cainxinth•2h ago•53 comments

Show HN: HomeBrew HN – generate personal context for content ranking

https://www.hackernews.coffee/
53•azath92•4h ago•29 comments

Fei-Fei Li: Spatial intelligence is the next frontier in AI [video]

https://www.youtube.com/watch?v=_PioN-CpOP0
196•sandslash•2d ago•91 comments

About AI Evals

https://hamel.dev/blog/posts/evals-faq/
86•TheIronYuppie•2d ago•15 comments

Kyber (YC W23) Is Hiring Enterprise BDRs

https://www.ycombinator.com/companies/kyber/jobs/F1XERLm-enterprise-business-development-representative
1•asontha•4h ago

Encoding Jake Gyllenhaal into one million checkboxes (2024)

https://ednamode.xyz/blogs/2.html
7•chilipepperhott•37m ago•0 comments

Astronomers discover 3I/ATLAS – Third interstellar object to visit Solar System

https://www.abc.net.au/news/science/2025-07-03/3i-atlas-a11pl3z-interstellar-object-in-our-solar-system/105489180
225•gammarator•13h ago•117 comments

Importance of context management in AI NPCs

https://walterfreedom.com/post.html?id=ai-context-management
24•walterfreedom•2d ago•11 comments

Where is my von Braun wheel?

https://angadh.com/wherevonbraunwheel
29•speckx•2h ago•21 comments

That XOR Trick (2020)

https://florian.github.io//xor-trick/
232•hundredwatt•2d ago•104 comments

Whole-genome ancestry of an Old Kingdom Egyptian

https://www.nature.com/articles/s41586-025-09195-5
131•A_D_E_P_T•16h ago•82 comments

Exploiting the IKKO Activebuds “AI powered” earbuds (2024)

https://blog.mgdproductions.com/ikko-activebuds/
543•ajdude•1d ago•213 comments

Head in the Clouds

https://www.commonwealmagazine.org/head-clouds
8•bryanrasmussen•3h ago•0 comments

Trans-Taiga Road (2004)

https://www.jamesbayroad.com/ttr/index.html
131•jason_pomerleau•15h ago•74 comments

Writing Code Was Never the Bottleneck

https://ordep.dev/posts/writing-code-was-never-the-bottleneck
588•phire•2d ago•297 comments

Doom Didn't Kill the Amiga (2024)

https://www.datagubbe.se/afb/
37•blakespot•2h ago•55 comments

ASCIIMoon: The moon's phase live in ASCII art

https://asciimoon.com/
247•zayat•2d ago•76 comments

Nano-engineered thermoelectrics enable scalable, compressor-free cooling

https://www.jhuapl.edu/news/news-releases/250521-apl-thermoelectrics-enable-compressor-free-cooling
101•mcswell•3d ago•56 comments

AI note takers are flooding Zoom calls as workers opt to skip meetings

https://www.washingtonpost.com/technology/2025/07/02/ai-note-takers-meetings-bots/
265•tysone•22h ago•326 comments

Gmailtail – Command-line tool to monitor Gmail messages and output them as JSON

https://github.com/c4pt0r/gmailtail
116•c4pt0r•16h ago•27 comments

Show HN: CSS generator for a high-def glass effect

https://glass3d.dev/
380•kris-kay•1d ago•99 comments

Writing Code Was Never the Bottleneck

https://ordep.dev/posts/writing-code-was-never-the-bottleneck
586•phire•2d ago

Comments

cies•7h ago
Funny article, but it seems that the author did not get the "Definition of Done" memo.

While...

> Writing Code Was Never the Bottleneck

...it was also never the job that needed to get done. We wanted to put well-working functionality in the hands of users, in an extensible way (so we could add more features later without too much hassle).

If lines of code were the metric of success (like "deal value" is for sales) we would incentivize developers for lines of code written.

weego•7h ago
We used to. Then we went through a phase of 'rockstar' developers who would spend their time on the fledgling social media sites musing on how their real value was measured in lines of code removed.
ctenb•7h ago
This article nowhere suggests that lines of code is something to be maximized.
lmm•6h ago
> We wanted to put well-working functionality in the hands of users, in an extensible way (so we could add more features later without too much hassle).

I think the author agrees, and is arguing that LLMs don't help with that.

virgilp•7h ago
TBH, I feel like the biggest help Cursor gives me is with understanding large-ish legacy codebases. It's an excellent (& active) "rubber duck". So I'm not sure the argument holds - LLMs don't just write code.
gabrielso•7h ago
I'm not having the same positive experience on a >25yo, insanely large codebase built with questionable engineering practices.
octo888•6h ago
Or is it just giving you a limited understanding but pretending it's grokked the entire codebase AND data?
zeroCalories•6h ago
Yeah, I've run several SOTA tools on our gnarly legacy codebase (because I desperately need help), and the results are very disappointing. I think you can only evaluate how well a tool understands a codebase if you already know it well enough to not need it. This makes me hesitant to use it in any situation where I do need it.
IshKebab•7h ago
Was anyone claiming it is the bottleneck? Seems like a straw man.
raffael_de•7h ago
I have seen it claimed many times, especially here on HN. The context is usually optimizing the coding experience with respect to keyboard options and editors (vim/emacs vs. modern IDEs).
gwervc•7h ago
A few weeks ago, people here were discussing how their typing speed was making them code faster. On the other hand, I haven't been limited by writing code; the linked article matches my professional experience.
Xss3•6h ago
You have to be fluent on the keyboard, able to type without thought or "hunting and pecking", if you want your ideas to flow from brain to PC smoothly and uninterrupted.

Speed is part of fluency and almost a shortcut to explaining the goal in real terms. Nobody is hunting and pecking at 80wpm.

bluefirebrand•5h ago
Maybe nobody is hunting and pecking at 80wpm, but I am not exaggerating when I say one of the best devs I've worked with was a hunt+peck typist.

The fact is that programming is not about typing lines of code into an editor.

rgoulter•6h ago
I think the better question is, "do I benefit from improving the speed here, for the cost it takes".

Improving typing speed from "fast" to "faster" is very difficult. I think it's worth distinguishing between "typing faster is not useful" and "it's not worth the effort to try to type much faster".

There are sometimes cases where it's worth paying a high cost even for some marginal benefit.

ozim•6h ago
There are all kinds of "low code"/"no code" tools out there whose main selling point is that you won't have to write code.

Loads of business people also think that code is some magical incantation and that clicking around in menus configuring stuff is somehow easier.

For a lot of people reading is hard, but no one will admit that. For years I was frustrated and angry at people because I didn't understand that someone can have trouble reading while being a proper adult working in a business role.

I also see when I post online how people misread my comments.

noirscape•6h ago
In my experience, writing code can be a bottleneck in the sense that it's pretty easy to end up in a scenario where you're effectively "drudging" through a ton of similar LOC that you can't really optimize in any other way.

GUI libraries are a pretty good example of this: almost every time, you're going to parse the form fields in exactly the same way, but due to how GUI libraries work, you end up writing multiple lines of function calls where the only real difference is the key you're using to get the value. You can't really turn it into a function or something like that either; it's just lines of code that have to be written to make things work, and although it should be really easy to predict what you need to do (just update the variable name and the string used in the function call), it can end up wasting non-marginal time.
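To make that concrete, here's a minimal sketch in Python, with a plain dict standing in for a hypothetical GUI form object (the field names are invented for illustration):

  # A stand-in for the values a GUI form hands back (hypothetical fields).
  form = {
      "name": "Ada Lovelace",
      "email": "ada@example.com",
      "phone": "555-0100",
      "street": "1 Analytical Way",
      "city": "London",
  }

  # The rote part: every line is identical except for the key and the
  # target variable. Because each result lands in its own named variable,
  # you can't factor the repetition into a loop or helper function.
  name = form.get("name", "")
  email = form.get("email", "")
  phone = form.get("phone", "")
  street = form.get("street", "")
  city = form.get("city", "")

Each line is trivially predictable from the one before it.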

LLMs being able to help with this sort of thing I'd consider more a failure of IDEs to help with it properly than anything else. This sort of task is rote, easy to predict, and should even be autogeneratable. Some IDEs even let you, but the option is typically hidden deep in a menu and needs to be enabled by messing with their ever-growing settings (when it probably could just be autodetected by checking the file; y'know, that's the reason people use IDEs instead of a notepad program). It's as if at some point IDEs changed from helping you write code quicker to only really being able to somewhat inspect and lint your codebase, unless you spend hours configuring them to do otherwise.

That was in part why Sublime Text and VS Code got their foot in the door, even though they have a much smaller feature list than most traditional IDEs: compared to IDEs they're lightweight (which is pretty crazy, since VS Code is an Electron app) and they provide pretty much equivalent features for most people. LLMs can often predict what's going to happen next after you've written two or three of these rote lines, which is a pretty good way to get the boring stuff out of the way.

Is that worth the sheer billions of dollars thrown at AI? Almost certainly not if you look at the entire industry (it's a massive bubble waiting to pop), but on the customer-fees end, for now, the price-to-time-saved ratio for getting rid of that rote work is easily worth it in a corporate environment. (I do expect this to change once the AI bubble pops, however.)

smoothdev-bp•7h ago
I don't think the author's comments are without merit. My experience has shown me that issues usually show up upfront and after the fact.

Upfront, there's the bottleneck between product organizations and engineering: getting decent requirements to know what to build, and engineering teams being unwilling to start until they have every i dotted and t crossed.

On the back end of the problem, most of the code we see written is already poorly documented across the spectrum. How many commit messages have we seen that just say "wip"? Or you go to a repository and the README is empty?

So the real danger is the Stack Overflow effect on steroids. It's not just a block of code that was put in without being understood; it's now entire projects, with little to no documentation to explain what was done or why decisions were made.

mgaunard•7h ago
In my experience the difficulty in building good software is having a good vision of what the end result should look like and how to get there.

If the developer is not savvy about the business case, he cannot have that vision, and all he can do is implement requirements as described by the business, which itself doesn't sufficiently understand technology to build the right path.

nkjoep•6h ago
I tend to agree. Ideas are cheap and can be easily steered around.

The tricky part is always the action plan: how do we achieve X in steps without blowing budget/time/people/other resources?

gabrielso•7h ago
The article misses the point that LLMs are not removing the bottleneck of writing code for people who know how to write code. They're removing this bottleneck for everyone else.
CerebralCerb•7h ago
I have yet to see anyone who previously could not write code be able to do so with LLMs, beyond simple scripts.
gabrielso•7h ago
In my experience, non-coders with LLMs can go beyond simple scripts and build non-trivial small applications nowadays, but the difference of outcomes between them and a competent coder with LLMs is still staggering.
oc1•6h ago
At least they will be more confident than ever that they can, when all the LLM ever says is "You are absolutely right!" ;)
bubblyworld•6h ago
I have - somebody in my mushroom foraging group wrote an app that predicts what kinds of mushrooms you are likely to find in different spots in our area, based on weather forecasts and data he's been collecting for years. It's a dead simple frontend/backend, but it works; he built and deployed it himself, and he had zero coding experience before this. Pretty impressive, from my perspective.

As a programmer I can see all the rough edges but that doesn't seem to bother the other 99% of people on the group who use it.

dankobgd•6h ago
Then the human resources woman should be the only programmer.
afiodorov•7h ago
Even without LLMs, we were approaching a point of saturation where software development was bottlenecked by market demand and funding, not by a shortage of code. Our tooling has become so powerful that the pure act of programming is secondary.

It's a world away from when the industry began. There's a great story from Bill Gates about a time when his ability to simply write code was an incredibly scarce resource. A company was so desperate for programmers that they hired him and Paul Allen as teenagers:

  "So, they were paying penalties... they said, 'We don’t care [that they are kids].' You know, so I go down there. You know, I’m like 16, but I look about 13. They hire us. They pay us. It’s a really amazing project... they got a kick out of how quickly I could write code."
That story is a powerful reminder of how much has changed. Writing code was the bottleneck years ago. However, the core problem has shifted from "How do we build it?" to "What should we build, and is there a business for it?"

Source: https://youtu.be/H1PgccykclM?si=YuIFsUcWc6sHRkAg

OtherShrezzing•7h ago
>Even without LLMs, we were approaching a point of saturation where software development was bottlenecked by market demand and funding, not by a shortage of code

I think it's credible to say that it was just market demand. Marc Andreessen's main complaint before the AI boom was that "there is more capital available than there are good ideas to fund". Personally, I think that's out of touch with reality, but he's the guy with all the money and none of the ideas, so he's a credible first-hand source.

afiodorov•6h ago
I think the "more capital than ideas" problem is highly contextual and largely a Silicon Valley-centric view.

There is immense, unmet demand for good software in developing countries—for example, robust applications that work well on underpowered phones and low-bandwidth networks across Africa or Southeast Asia. These are real problems waiting for well-executed ideas.

The issue isn't a lack of good ideas, but a VC ecosystem that throws capital at ideas of dubious utility for saturated markets, while overlooking tangible, global needs because they don't fit a specific hyper-growth model.

aleph_minus_one•4h ago
> while overlooking tangible, global needs because they don't fit a specific hyper-growth model.

I do believe that these also fit the hyper-growth model. It's rather that these investors have a very US-centric knowledge of markets and market demands, and thus can barely judge ideas that target very different markets.

oytis•6h ago
If you define "good idea" as limited to SaaS, then sure, you'll reach saturation pretty soon. But, say, anything that involves hardware could definitely benefit from a little more funding.

Also, he's a VC, but where more funding is needed, even in pure software, is in sustainable businesses that don't have the ambition to take over the world, but rather serve their customer niche well.

netcan•5h ago
On a tangential note... this type of problem is very relevant for "impact of AI" estimates.

I think we have a tendency to overestimate efficiency because of the central role it plays at the margins that mattered to us at any given time.

But the economy is bottlenecked in complex ways. Market demand, money, etc.

It's not obvious that 100X more code is something we can use.

aleph_minus_one•4h ago
> There's a great story from Bill Gates about a time when his ability to simply write code was an incredibly scarce resource.

The capability to write high-quality code and have a deep knowledge about it is still a scarce resource.

The difference from former days is rather that the industry began to care less about this.

Cthulhu_•4h ago
This happened in tandem with several generations of programming languages, tooling, best practices, etc. LLMs haven't suddenly increased people's productivity; improved tooling did.

Back when these tools did not exist yet, a lot of this knowledge didn't exist yet. Software now is built on the shoulders of giants. You can write a line of code and get a window in your operating system, people like Bill Gates and his generation wrote the low level graphics code and had to come up with the concept of a window first, had to invent the fundamentals of graphics programming, had to wait and interact with hardware vendors to help make it performant.

otabdeveloper4•58m ago
> Writing code was the bottleneck years ago.

No it wasn't. It never was.

afiodorov•31m ago
If you're hiring 16-year-olds just because of their ability to write code, that sounds like you're bottlenecked by writing code. Your comment doesn't clarify why you disagree.
mgaunard•7h ago
Right, we all know this. LLMs write a lot of bad code that cannot be realistically reviewed.

I've even had code submitted to me by juniors which didn't make any sense. When I ask them why they did that, they say they don't know, the LLM did it.

What this new trend is doing is generating a lot of noise and overhead on maintenance. The only way forward, if embracing LLMs, is to use LLMs also for the reviewing and maintenance, which obviously will lead to messy spaghetti, but you now have the tools to manage that.

But the important realization is that for most businesses, quality doesn't really matter. Throwaway LLM code is good enough, and when it isn't you can just add more LLM on top until it does what you think you need.

djeastm•3h ago
>When I ask them why they did that, they say they don't know, the LLM did it.

I can't imagine a professional software developer in a position of authority leaving that statement unchallenged and uncorrected.

If a person doesn't stand behind the code they write, they shouldn't be employed. Full stop.

JonChesterfield•1h ago
> When I ask them why they did that, they say they don't know, the LLM did it.

This should resolve itself via rounds of redundancies, probably targeting the senior engineers who are complaining about the juniors, then by insolvency.

revskill•7h ago
Nah, it depends on the quality of the data you trained the bot on.
gdiamos•7h ago
I used to think I needed to type faster.

As I get older, I spend more of my coding time on walks, at the whiteboard, reading research, and running experiments.

Cthulhu_•4h ago
Exactly; it's not about the volume of code, it's about the value of it. The best code is the code never written.

Reminds me of a former colleague of mine: I'd sit next to him and get frustrated because he was a two-finger typer. But none of his code was wasted. I frequently write code, then cmd+z back to ten minutes ago or just `git checkout .` because I lost track.

marcosdumay•3h ago
Programmers don't have to type fast, but we have to type unconsciously. Training for one of those usually also trains for the other.
andrelaszlo•6h ago
My most recent example of this is mentoring young, ambitious, but inexperienced interns.

Not only did they produce about the same amount of code in a day that they used to produce in a week (or two), several other things made my work harder than before:

- During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).

- The time spent on trivial issues went down a lot, to almost zero, but the remaining issues were much more subtle and time-consuming to find and describe.

- Many bugs were of a new kind (to me), the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be. This breakdown of pattern-matching compared to "organic" code made the overhead much higher. Spending decades reviewing code and answering Stack Overflow questions often makes it possible to pinpoint not just a bug but how the author got there in the first place and how to help them avoid similar things in the future.

- A simple but bad (inefficient, wrong, illegal, ugly, ...) solution is a nice thing to discuss, but the LLM-assisted junior dev often cooks up something much more complex, which can be bad in many ways at once. The culture of slowly growing a PR from a little bit broken, thinking about design and other considerations, until it's high quality and ready for a final review doesn't work the same way.

- Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.

This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves. The junior dev would feel (I assume) much more productive and competent, but the response to their work would eventually lack most of the usual enthusiasm or encouragement from senior devs.

How do people work with these issues? One thing that worked well for me initially was to always require a lot of (passing) tests, but eventually these tests would suffer from many of the same problems.

epolanski•6h ago
> the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be

I don't understand: if they don't test the code they write (even manually), that's not an LLM issue, it's a process one.

They have not been taught what it means to have a PR ready for review; LLMs are irrelevant here.

skydhash•6h ago
Sometimes orgs don't mandate testing or descriptive PRs, and then you requiring it makes you look like a PITA.
Cthulhu_•4h ago
PITA, or senior developer that's too senior for that company? Honestly, I think an organization has no say in discussions about testing or descriptive PRs; on the other side, a decent developer does not defer to someone higher up to decide on the quality of their work.
skydhash•4h ago
Some managers will do anything for velocity, even if the direction is towards a cliff with sharp rocks below. You try to produce quality work while others are doing tornado programming all over the codebase and being praised for it.
zeroCalories•6h ago
Writing a test requires you to actually know what you're trying to build, and understanding that often requires the slow cooking of a problem that an LLM robs you of. I think this is less of a problem when you've already been thinking deeply about the domain / codebase for a long time. Not true for interns and new hires.
noodletheworld•6h ago
How do you test edge cases?

You think about the implementation and how it can fail. If you don’t think about the implementation, or don’t understand the implementation, I would argue that you can earnestly try to test, but you won’t do a good job of it.

The issue of LLMs here is the proliferation of people not understanding the code they produce.

Having agents or LLMs review and understand and test code may be the future, but right now they’re quite bad at it, and that means that the parent comment is spot on; what I see right now is people producing AI content and pushing the burden of verification and understanding to other people.

chii•6h ago
> pushing the burden of verification and understanding to other people.

Where was the burden prior to LLMs?

If a junior cannot prove their code works and show an understanding of it, how was this "solved" before LLMs? Why can't the same methods work post-LLM? Is it due to volume? If a junior produces _more_ code they don't understand, it doesn't give them the right to just skip PR/review and testing etc.

If they do, where's upper management's role here then? The senior should be bringing up this problem and work out a better process and get management buy-in.

jaapz•6h ago
Testing is often very subtle. If you don't understand the changes you made (or really didn't make, because the LLM made them for you), you don't know how they can subtly break other functionality that depends on them. Even before LLMs, this was a problem for juniors: they would change some code, it would build, it would work for their feature, but it would break something else that was seemingly unrelated. Only if you understand what your code changes actually "touch" do you know what to test (manually or automatically).

This is of course especially significant in codebases that do not have strict typing (or any typing at all).

epolanski•6h ago
> The issue of LLMs here is the proliferation of people not understanding the code they produce.

Let's ignore the code quality or code understanding: these juniors are opening PRs, according to the previous user, that simply do not meet the acceptance criteria for some desired behavior of the system.

This is a process, not tools issue.

I too have AI-native juniors (they learned to code alongside Copilot, Cursor, or ChatGPT) and they would never dare open a PR that doesn't work or doesn't meet the requirements. They may miss some edge case? Sure, so do I. That's acceptable.

If OP's juniors are, it's because they have not been taught to only ask for feedback once their version of the system does what it needs to do.

dakiol•6h ago
LLMs amplify the problem, so they are not that irrelevant.
andrelaszlo•5h ago
I agree, normally the process (especially of manual testing) is a cultural thing and something you instill into new devs when you get broken PRs - "please run the tests before submitting for review", or "please run the script in staging, here's the error I got: ...".

Catching this is my job, but it becomes harder if the PR actually has passing tests and just "looks" good. I'm sure we'll develop the culture around LLMs to make sure to teach new developers how to think, but since I learned coding in a pre-LLM world, perhaps I take a lot of things for granted. I always want to understand what my code does, for example - that never seemed optional before - but now it seems to get you much further than just copy-pasting stuff from Stack Overflow ever did.

stuartjohnson12•6h ago
In the medium term I think you have to shift the work upstream to show that they've put in the labour to actually design the feature or the bug fix.

I think we've always had a mental model that needs to change: senior engineers and product managers scope and design features, IC developers (including juniors for simpler work) implement them, and then senior engineers participate in code review.

Right now I can't see the value in having a junior engineer on the team who is unable to think about how certain features should be designed. The junior engineer who previously spent his time spinning tires trying to understand the codebase and all the new technologies he has to get to grips with should instead spend that time trying to figure out how that feature fits into the big picture, consider edge cases, and then propose a design for the feature.

There are many junior engineers who I wouldn't trust with that kind of work, and honestly I don't think they are employable right now.

In the short term, I think you just need to communicate this additional duty of care to make sure pull requests are complete, because otherwise there's an asymmetry of workload, and to judge those interns and juniors on how respectful of that they are.

imiric•6h ago
I don't think the junior/senior distinction is useful in this case. All software engineers should care about the quality of the end product, regardless of experience. I've seen "senior" engineers doing the bare minimum, and "junior" engineers putting vastly more care into their work. Experience is something that is accrued over time, which gives you more insight into problems you might have seen before, but if there's no care about the product, then it's hardly relevant.

The issue with LLM tools is that they don't teach this. The focus is always on getting to the end result as quickly as possible, skipping any of the actually important parts of software development. The way problem solving is approached with LLMs is by feeding them back to the LLM, not by solving them yourself. This is another related issue: relying on an LLM doesn't give you software development experience. That is gained by actually solving problems yourself; understanding how the system works, finding the underlying root cause, fixing it in an elegant way that doesn't create regressions, writing robust tests to ensure it doesn't happen again, etc. This is the learning experience. LLMs can help with this, but they're often not used in this way.

bluefirebrand•2h ago
> I don't think they are employable right now

Well that sucks because that just means the pipeline for engineers to become seniors is completely broken

thisoneisreal•6h ago
I think this is going to look a lot like the same problem in education, where the answer is that we will have to spend less time consuming written artifacts as a form of evaluation. I think effective code reviews will become more continuous and require much more checking in, asking for explanations as the starting point instead of "I read all of your code and give feedback." That just won't be sustainable given the rate at which text can now be output.

AI creates the same problem for hiring too: it generates the appearance of knowledge. The problem you and I have as evaluators of that knowledge is there is no other interface to knowledge than language. In a way this is like the oldest philosophy problem in existence. Socrates spent an inordinate amount of time railing against the sophists, people concerned with language and argument rather than truth. We have his same problem, only now on an industrial scale.

To your point about tests, I think the answer is to not focus on automated tests at first (though of course you should have those eventually), but instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.

skydhash•6h ago
> instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.

There’s a reason no one does it. Because it’s inefficient. Even in recorded video format. The helpful things are tests and descriptives PRs. The former because its structure is simple enough that you can judge it, and the test run can be part of the commit. The second is for the simple fact that if you can write clearly about your solution, I can the just do a diff of what you told me and what the code is doing, which is way faster than me trying to divine both from the code.

zeroCalories•6h ago
I think asking people to explain is good, but it's not scalable. I do this in interviews when I suspect someone is cheating, and it's very easy to see when they've produced something that they don't understand. But it takes a long time to run through the code, and if we had to do that for everything because we can't trust our engineers anymore that would actually decrease productivity, not increase it.
jameshart•5h ago
Evaluating written artifacts is broken in education because the end goal of education is not the production of written artifacts - it is the production of knowledge in someone’s mind and the artifacts were only intended to see if that knowledge transfer had occurred. Now they no longer provide evidence of that. A ChatGPT written essay about the causes of the civil war is not of any value to a history professor, since he does not actually need to learn about the civil war.

But software development is about producing written artifacts. We actually need the result. We care a lot less about whether or not the developer has a particular understanding of the world. A cursor-written implementation of a login form is of use to a senior engineer because she actually wants a login form.

bluefirebrand•2h ago
> We care a lot less about whether or not the developer has a particular understanding of the world

We actually should because the developer has to maintain and extend the damned thing in the future

thisoneisreal•1h ago
I think it's both actually, and you're hitting on something I was thinking of while writing that post. I'm reading "The Perfectionists," which is about the invention of precision engineering. It had what I would consider three aspects, all of which we should care about:

1. The invention of THE CONCEPT BEHIND THE MACHINE. In our context, this is "Programming as Theory Building." Our programs represent some conception of the world that is NOT identical to the source code, much the way early precision tools embodied philosophies like interchangeability.

2. The building of the machine itself, which has to function correctly. To your point, this is one of the major things we care about, but I don't agree it's the only thing. In the code world this IS the code. When this is all we think about, though, I think you get spaghetti codebases and poorly trained developers.

3. Training apprentices in both the ideas and the craft of producing machines.

You can argue we should only care about #2, many businesses certainly incentivize thinking in that direction, but I think all 3 are important. Part of what makes coding and talking about coding tricky is that written artifacts, even the same written artifacts, express all 3 of these things and so matters get very easily confused.

SkyBelow•1h ago
This is a key difference, but I think it plays less of a role than it initially appears, because growing employees' knowledge helps them build better artifacts faster (and fix them when things go wrong). Short term, the login form is what's desired. But long term, someone with enough knowledge to support the login form, for when the AI doesn't quite get it all right, is desired.
rr808•4h ago
>AI creates the same problem for hiring too

Leetcode Zoom calls were always marginal; now, with chat AI, they're virtually useless, though still the norm.

aleph_minus_one•4h ago
> asking for explanations as the starting point instead of "I read all of your code and give feedback." That just won't be sustainable given the rate at which text can now be output.

I claim that this approach is sustainable.

The idea behind the "I read all of your code and give feedback" methodology is that the writer really put a lot of deep effort into making sure the code is of great quality, and then expects feedback, which is often valuable. As long as you can, with some effort, find out by yourself how improvements could be made, don't bother asking for someone else's time.

The problem is that the writers of "vibe-generated code" hardly ever put such deep effort into the code. Thus the code is simply not worth asking feedback for.

Foreignborn•6h ago
I have a team that’s somewhat junior at a big company. We pretty much have everyone “vibe plan” significantly more than vibe code.

- You need to think through the product more; really be sure it's as clarified as it can be. Everyone has their own process, but it looks like rubber-ducking, critiquing, breaking work into phases, those into tasks, etc. (jobs to be done, business requirement docs, domain-driven design planning, UX writing product lexicon docs, literally any and all artifacts)

- Prioritize setting up tooling and feedback loops (code quality tools of any and every kind are required). This includes custom rules to help enforce anything you decided during planning. Spend time on this and life will be a lot better for everyone.

- We typically make very, very detailed plans, and then the agents will "IVI" it (e.g. automatic linting, single test, test suite, manual evaluation).

You basically set up as many, and as diverse, automatic feedback signals as you can.
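For concreteness, here's a minimal sketch of one such signal, assuming a Python codebase with an src/ layout; the rule itself (docstrings on public functions) is just an illustrative stand-in for whatever you decided during planning:

  # check_docstrings.py: one small custom feedback signal, run in CI.
  # Exits nonzero when a public function lacks a docstring, giving the
  # agent (or the human) an automatic, actionable failure to react to.
  import ast
  import pathlib
  import sys

  failures = []
  for path in pathlib.Path("src").rglob("*.py"):
      tree = ast.parse(path.read_text(), filename=str(path))
      for node in ast.walk(tree):
          if (isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
                  and not node.name.startswith("_")
                  and ast.get_docstring(node) is None):
              failures.append(f"{path}:{node.lineno} {node.name}")

  if failures:
      print("Public functions missing docstrings:")
      print("\n".join(failures))
      sys.exit(1)

The point isn't this particular rule; it's that each planning decision gets encoded as a cheap, automatic check the agent can run in its loop.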

—-

I will plan and document for 2-4 hours, then print a bunch of small "PRDs" that are like "1 story point" small. There are clear definitions of done.

Doing this, I can pretty much go to the gym or have meetings or whatever for 1-2 hours, hands off.

—-

lionkor•5h ago
I pray for whoever has to review code you didn't bother writing
Sammi•5h ago
Software is going to be of two types:

1. Mostly written by LLMs, and only superficially reviewed by humans.

2. Written 50-50% by devs and LLMs. Reviewed to the same degree as now.

Software of type 2 will be more expensive and probably of higher quality. Type 1 software will be much more common, as it will be cheaper. Quality will be lower, but the open question is whether it will be good enough for the use cases of cheap, mass-produced software. This is the question that is still unanswered by practical experience, and it's the question that all the venture capitalists are salivating over.

danaris•4h ago
I 100% guarantee you there will be plenty of software still written fully by humans—and even more that's written 95% by humans, with minor LLM-based code autocomplete or boilerplate generation.
Foreignborn•4h ago
Everyone is responsible for what they deliver. No one is shipping gluttonous CLs, because no one would review them. You still have to know and defend your work.

Not sure what to tell you otherwise. The code is much more thought through, with more tests and better docs. There are even entire workflows for the CI portion and review.

I would look at workflows like this as augmentation rather than automation.

stpedgwdgfhgdd•5h ago
“We typically make very, very detailed plans” - this is writing code in English, without tests. Admittedly, since generating code is faster, you get faster feedback. Still, I do not think it is as efficient as an incremental, test-driven approach, where you can optimize for the feedback loop early on.
Cthulhu_•5h ago
You get faster feedback in code, but you won't know if it actually does what it's supposed to do until it's in production. I don't believe (though I have no numbers) that LLMs speed up that feedback loop.
imiric•6h ago
Well said. That has been my experience as well, but from the perspective of using these tools on my own. Sure, I can now generate thousands of lines of code relatively quickly, but the hard part is actually reviewing the code to ensure that it does what I asked, fix bugs, hunt for security issues, refactor, simplify and remove code, and so on. I've found that it's often much more productive to write the code myself, and rely on the LLM for simple autocomplete tasks on the way. I imagine that this workflow would be much harder when you have to communicate with a less experienced human who will in turn need to translate it to an LLM, because of the additional layers of indirection.

I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether, or they never cared to do them in the first place. Then the burden of maintaining code quality falls on the few who actually care, a burden that has now grown much larger because of the amount of code thrown at them. Unfortunately, these people are often seen as pedants and sticklers who block PRs for no good reason. That sometimes does happen, but most of the time, these are the folks who actually care about the product shipped to users.

I don't have a suggestion for improving this, but rather a grim outlook that it's only going to get worse. The industry will continue to be flooded by software developers trained on LLM use exclusively, and the companies who build these tools will keep promoting the same marketing BS because it builds hype, and by extension, their valuation.

fhd2•6h ago
> I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether

I think that's probably true, but I think there are multiple layers here.

There's what's commonly called vibe coding, where you don't even look at the code.

Then there's what I'd call augmented coding, where you generate a good chunk of the code, but still refactor and generally try to understand it.

And then there's understanding every line of it. For this in particular, I don't believe LLMs speed things up. You can get the LLM to _explain_ every line to you, but what I mean is looking at documentation and specs to build your understanding, and testing out fine-grained changes to confirm it. This is something you naturally do while writing code, and unless you type comically slowly, I'm not convinced it's not faster this way around. There's a very tight feedback loop when you are writing and testing code atomically. In my experience, this prevents an unreasonable number of emergencies and makes debugging orders of magnitude faster.

I'd say the bulk of my work is either in the second or the third bucket, depending on whether it's production code, the risks involved etc.

These categories have existed before LLMs. Maybe the first two are cheaper now, but I've seen a lot of code bases that fall into them - copy pasting from examples and SO. That is, ultimately, what LLMs speed up. And I think it's OK for some software to fall into these categories. Maybe we'll see too much fall into them for a while. I think eventually, the incredibly long feedback cycles of business decisions will bite and correct this. If our industry really flies off the handle, we tend to have a nice software crisis and sort it out.

I'm optimistic that, whatever we land on eventually, generative AI will have reasonable applications in software development. I personally already see some.

AbstractH24•5h ago
If you are writing code to solve a one-off task, the first category is OK.

What boggles my mind is that people are writing code that's the foundation of products like that.

Maybe it’s imposter syndrome though to think it wasn’t already being done before the rise of LLMs

bluefirebrand•5h ago
> Maybe it’s imposter syndrome though to think it wasn’t already being done before the rise of LLMs

It may well have been happening before the rise of LLMs, but the volume was a lot more manageable

Now it's an unrestricted firehose of crap, and there just aren't enough good devs to wrangle it.

AbstractH24•5h ago
Would be interesting to look at the real-world impact of the rise of outsourcing coding to the cheapest, lowest-skilled overseas body shops en masse around the 2000s. Or the impact of trash versions of commodified products flooding Amazon.

The volume here is orders of magnitude greater, but that’s the closest example I can think of.

techexec22•1h ago
> Would be interesting to look at the real-world impact of the rise of outsourcing coding to the cheapest, lowest-skilled overseas body shops en masse around the 2000s.

Tech exec here. It is all about gamed metrics. If the board-observed metric is mean salary per tech employee, you'll get masses of people hired in India. In our case, we hire thousands in India. Only about 20% are productive, but "% productive" isn't the metric, so no one cares. You throw bodies at the problem and hope someone solves it. It's great for generations of overseas workers, many of whom may not have had a job otherwise. You probably have dozens of Soham Parekhs.

Western execs also like this because it inflates headcount, which is usually what exec comp is based on: "I run a team of 150..." Their lieutenants also like it because they can say "I run a team of 30", as do their sub-lieutenants: "I run a team of 6".

fhd2•5h ago
In my experience, it was. And if we're getting real for a moment, the vast majority of programmers get paid by a company that is, first and foremost, interested in making more money. IMHO all technical decisions are business decisions in disguise.

Can the business afford to ship something that fails for 5% of their users? Can they afford to find out before they ship it or only after? What risks do they want to take? All business decisions. In my CTO jobs and fractional CTO work, I always focused on exposing these to the CEO. Never a "no", always a "here's what I think our options and their risks and consequences are".

If sound business decisions lead to vibe coding, then there's nothing wrong with it. It's not wrong to lose a bet where you understood the odds.

And don't worry about businesses that make uninformed bets. They can get lucky, but by and large they will not survive against those making better-informed bets. Law of averages. It just takes a while.

imiric•5h ago
I agree with your sentiment, but not with the conclusion.

Sure, technical decisions ultimately depend on a cost-benefit analysis, but the companies who follow this mentality will cut corners at every opportunity, build poor quality products, and defraud their customers. The unfortunate reality is that in the startup culture "move fast and break things" is the accepted motto. Companies can be quickly started on empty promises to attract investors, they can coast for months or years on hype and broken products, and when the company fails, they can rebrand or pivot, and do it all over again.

So making uninformed bets can still be profitable. This law of averages you mention just doesn't matter. There will always be those looking to turn a quick buck, and those who are in it for the long haul, and actually care about their product and customers. LLMs are more appealing to the former group. It's up to each software developer to choose the companies they wish to support and be associated with.

AbstractH24•5h ago
To play devil's advocate for a second, the law of averages says nobody should ever found a startup. Or any business, for that matter.

It’s rare that startups gain traction because they have the highest quality product and not because they have the best ability to package, position, and market it while scaling all other things needed to mane a company.

They might get acqui-hired for that reason, but rarely do they stand the test of time. And when they do, it's almost always because the founders stepped aside and let suits run all or most of the show.

fhd2•2h ago
Tech and product are just small components in what makes the business profitable. And often not as central as we in our industry might _like_ to believe. From my perspective, building software is the easy, the fun part. Many bets made have nothing to do with the software.

And yes, there is enshittification, and there are immoral actors. The market doesn't solve these problems; if anything, it causes them.

What can solve them? I have only two ideas:

1. Regulation. To a large degree this stops some of the worst behaviour of companies, but the reality in most countries I can think of is that it's too slow, and too corrupt (not necessarily by accepting bribes, also by wanting to be "an AI hub" or stuff like that) to be truly effective.

2. Professional ethics. This appears to work reasonably well in medicine and some other fields, but I have little hope our field is going to make strides here any time soon. People who have professional ethics either learn to turn it off selectively, or burn out. If you're a shady company, as long as you have money, you will find competent developers. If you're not a shady company, you're playing with a handicap.

It's not all so black and white for sure, so I agree with you that there's _some_ power in choosing who to work for. They'll always find talent if they pay enough, but no need to make it all too easy for them.

gortok•5h ago
Developers have always loved the new and shiny. Heck, getting developers not to rewrite an application in their new favorite framework is a tough sell.

LLM “vibe coding” is another continuation of this “new hotness”, and while the more seasoned developers may have learned to avoid it, that’s not the majority view.

CEOs and C-suites have always been disconnected from the first order effects of their cost-cutting edicts, and vibe coding is no different in that regard. They see the ten dollars an hour they spend on LLMs as a bargain if they can hire a $30 an hour junior programmer instead of a $150 an hour senior programmer.

They will continue to pursue cost-cutting, and the advent of vibe coding matches exactly what they care about: software produced for a fraction of the cost.

Our problem, or the problem of the professionals, is that we have not been successful in translating the inherent problems with the CEOs' approach into a change in how the C-suite operates. We have not successfully persuaded them that higher-quality software = more sales, or lower liability, or lower-cost maintenance, and that's partially because we as an industry have eschewed those for "move fast and break things". Vibe coding is "Move Fast and Break Things" writ large.

aleph_minus_one•5h ago
> Heck, getting developers not to rewrite an application in their new favorite framework is a tough sell.

This depends a lot on the "programming culture" from which the respective developers come. For example, in the department where I work (in some conservative industry) it would rather be a tough sell to use a new, shiny framework because the existing ("boring") technologies that we use are a good fit for the work that needs to be done and the knowledge that exists in the team.

I rather have the feeling that the culture around web development in particular (both the client- and server-side parts) is very prone to this phenomenon.

LtWorf•3h ago
In my personal experience, web development teams don't really have much to do, so they create work for themselves.
gortok•2h ago
I agree.

The Venn diagram of the companies that embrace vibe coding and the companies whose developers like to rewrite applications when a new framework comes out is almost a perfect circle, however.

ffsm8•5h ago
There is also the situation in which the developer knows the tools by heart and has ownership of the codebase, hence intuitively knows exactly what has to be changed and only needs to take action.

These devs don't get any value whatsoever from LLMs, because explaining the change to the LLM takes longer than doing it themselves.

Personally, I feel like everything besides actually vibe coding + maybe sanity checking via a quick glance is a bad LLM application at this point in time.

You're just inviting tech debt if you actually expect this code to be manually adjusted at a later phase. Normally, code tells a story. You should be able to understand the thought process of the developer while reading it, and if you can't, there is an issue. This pattern doesn't hold up for generated code, even if it works. If an issue pops up later, you'll just be scratching your head over what it was meant to do.

And just to be clear: I don't think vibe coding is ready for current enterprise environments either - though I strongly suspect it's going to decimate our industry once tooling and development practices for this have been pioneered. The current models are already insanely good at coding if provided the correct context and prompt.

E.g. countless docs on each method defining use cases, forcing the LLM to backtrack through the code paths before changes to automatically determine regressions, etc. Current vibe coding is basically like the original definition of a hacker: a person creating furniture with an axe. It basically works, kinda.

SkyBelow•1h ago
>I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether, or they never cared to do them in the first place.

I think this follows a larger pattern with AI. It helps someone with enough maturity not to rely on it too blindly and enough foresight to know they still need to grow their own skills, but it does well enough that those looking for an easy or quick answer are now given a tool that lets them skip more of the hard work. It empowers seniors (developers, or senior level in unrelated fields) but traps juniors. Same as using AI to solve a math problem: is the student verifying their own solution against the AI's, or copying and pasting while thinking they are learning by doing so (or even recognizing they aren't, but not worrying about it since the AI can handle it, and not realizing how this will trap them on ever harder problems in the future)?

>...but rather a grim outlook that it's only going to get worse. The industry will continue to be flooded by software developers...

I somewhat agree, but it's even more grim: I think we are looking at this across many more fields than just software development. The way companies make use of it and the market forces at the corporate level might be different, but it is also impacting education, and that alone should be enough to negatively impact other areas.

agumonkey•6h ago
Somewhat interesting how similar this is to other uses of ML-driven tools, like electronics engineering, where solutions can be near impossible for experienced engineers to understand.
dizhn•6h ago
> - Many bugs were of a new kind (to me), the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be.

This reminded me of a quarter-million-dollar software project one of my employers had contracted out to a team in a different country. On the face of it, especially if you went down the spec sheet, everything was there, but the thing was not a cohesive whole. They did not spend one second beyond the spec sheet, and none of the common-sense things that "follow" from the spec were there. The whole thing was scrapped immediately.

With LLMs this kind of work now basically becomes free to do and automatic.

AbstractH24•6h ago
Cheap labor with low EI is, and has been, what will suffer most from generative AI.
vages•5h ago
What does EI mean in this sentence? Tried looking it up and found no definition that stood out.
steveBK123•5h ago
Emotional intelligence
Sammi•5h ago
I'm expecting to see so much more poor quality software being made. We're going to be swimming in an ocean of bad software.

Good experienced devs will be able to make better software, but so many inexperienced devs will be regurgitating so much more lousy software, at a pace never seen before, that it's going to be overwhelming. Or, as the original commenter described, they're already being overwhelmed.

bbarnett•4h ago
I'm waiting for someone to use an LLM to handle all their AWS deployment, without review, with eventual bankruptcy as the result.

Even better if the accountants are using LLMs.

Or even better, hardware prototyping using LLMs with EEs barely knowing what they are doing.

So far, most software dumbassery with LLMs can at least be fixed. Fixing board layouts, or chip designs, not as easy.

belter•4h ago
AWS itself is currently polluting its online documentation with GenAI-generated snippets... I can only imagine what horrors lurk in their internal code base. In a move similar to the movie WarGames, maybe humans are now out of the loop, and before a final commit the LLMs are deciding...
conartist6•4h ago
Yes, but some of us have seen this coming for a long time now.

I will have my word in the matter before all is said and done. While everyone is busy pivoting to AI, I keep my head down and build the tools that will be needed to clean up the mess...

distalx•4h ago
Any hints on what kind of tools you're creating for the inevitable mess?
conartist6•4h ago
https://github.com/bablr-lang/

I'm building a universal DOM for code so that we should see an explosion in code whose purpose is to help clean up other code.

If you want to write code that makes changes to a tree of HTML nodes, you can pretty much write that code once and it will run in any web browser.

If you want to write code that makes a new program by changing a tree of syntax nodes, there are an incredible number of different and wholly incompatible environments for that code to run in. Transform authors are likely forced to pick one or two engines to support, and anyone who needs to run a lot of codemods will probably need to install 5-10 different execution engines.

Most people seem not to notice or care about this situation, or to realize that their tools are vastly underserving their potential, just because we can't come up with the basic standards necessary to enable universal execution of codemod code. This also means there are drastically lower incentives to write custom codemods and lint rules than there could/should be.

lsaferite•3h ago
Where does this "universal DOM for code" sit in relation to CSTs and ASTs?
conartist6•3h ago
It's an immutable btree-based format for syntax trees which contain information both abstract and concrete. Our markup language for serializing the trees is Concrete Syntax Tree Markup Language, or CSTML.
mdaniel•3h ago
Who is the consumer for the JSX noise that is happening here? https://github.com/bablr-lang/language-en-ruby/blob/550ad6fd...

As two nits, https://docs.bablr.org/reference/cstml and https://bablr.org/languages/universe/ruby are both 404, but I suspect the latter is just falling into the same trap many namespaces do: using a URL when they meant it as a URN

conartist6•3h ago
We're cleaning up the broken links as time goes on, but it is probably obvious to you from browsing around that some parts of the site are still very much under construction.

The JSX noise is CSTML, a data format for encoding/storing parse trees. It's our main product. E.g. a simple document might look something like `<*BooleanLiteral> 'true' </>`. It's both the concrete syntax and the semantic metadata offered as a single data stream.

The easiest way to consume a CSTML document is to print the code stored in it, e.g. `printSource(parseCSTML(document))`, which would get you `true` for my example doc. Since we store all the concrete syntax, printing the tree is guaranteed to get you the exact same input program the parser saw. This means you can use this to rearrange trees of source code and then print them over the original, allowing you to implement linters, pretty-printers, or codemod engines.

These CSTML documents also contain all the information necessary to do rich presentation of the code document stored within (syntax highlighting). I'm going to release our native syntax highlighter later today hopefully!

LtWorf•4h ago
A faster command to recursively unlink files.
oytis•4h ago
There are ways to fight it though. Look at the Linux kernel for instance - they were overwhelmed with poor contributions long before LLMs. The answer is to maintain standards that put as much burden on the contributor as possible, and to normalize an unapologetic "no" from reviewers.
gyesxnuibh•4h ago
Does that work as well with non-strangers who are your coworker? I'm not sure.

Also, if you're organizationally changing the culture to force people to put more effort into writing the code, why are you even organizationally using LLMs...?

oytis•3h ago
> Does that work as well with non-strangers who are your coworker?

Yeah, OK, I guess you have to be a bit less unapologetic than Linux kernel maintainers in this case, but you can still shift the culture towards more careful PRs I think.

> why are you even organizationally using LLMs

Many people believe LLMs make coders more productive, and given the rapid progress of gen AI it's probably not wise to just dismiss this view. But there need to be guardrails to ensure the productivity is real and not just creating liability. We could live with weaker guardrails if we can trust that the code was in a trusted colleague's head before appearing in the repo. But if we can't, I guess stronger guardrails are the only way, aren't they?

sarchertech•3h ago
I don’t want to just dismiss the productivity increase. I feel 100% more productive on throw away POCs and maybe 20% more productive on large important code bases.

But when I actually sit down and think it through, I’ve wasted multiple days chasing down subtle bugs that I never would have introduced myself. It could very well be that there’s no productivity gain for me at all. I wouldn’t be at all surprised if the numbers showed that was the case.

But let’s say I am actually getting 20%. If this technology dramatically increases the output of juniors and mid level technical tornadoes that’s going to easily erase that 20% gain.

I’ve seen codebases that were dominated by mid level technical tornadoes and juniors; no amount of guardrails could ever fix them.

Until we are at the point where no human has to interact with code (and I’m skeptical we will ever get there short of AGI) we need automated objective guardrails for “this code is readable and maintainable”, and I’m 99.999% certain that is just impossible.

gyesxnuibh•2h ago
My point in that second question was: Is the human challenge of getting a lot of inexperienced engineers to fully understand the LLM output actually worth the time, effort and money to solve vs sticking to solving the technical problems that you're trying to make the LLM solve?

Usually organizational changes are massive efforts. But I guess hype is a hell of an inertia buster.

exe34•3h ago
> Does that work as well with non-strangers who are your coworker? I'm not sure.

I imagine if you have a say in their performance review, you might be able to set "writes code more thoughtfully" as a PIP?

bluefirebrand•2h ago
No, because that's not measurable
aleph_minus_one•2h ago
> Does that work as well with non-strangers who are your coworker? I'm not sure.

Simply hire people who score high on the Conscientiousness, but low on the Agreeableness personality trait. :-)

belter•4h ago
Please, somebody make the Is MongoDB webscale? video for LLMs...
tristramb•3h ago
And for extra credit, create it using an LLM.
LunicLynx•3h ago
Did anyone say React in the Windows Start menu?

Folks, we already have bad software. Everywhere.

And nobody cares.

kevindamm•1h ago
People care, it's just that they're not the ones shipping as often.
Aeolun•3h ago
Honestly, I expect LLM’s or the combination of algorithms that make them usable (Claude Code), to get better fast enough that we’ll never reach that phase. All the good devs know what the current problem with LLM assisted coding are, and a lot of them are working to mitigate and/or fix those problems.
hn_throwaway_99•2h ago
I'm showing my age, but this is almost exactly analogous to the rise of Visual Basic in the late nineties.

The promise then was similar: "non-programmers" could use a drag-and-drop, WYSIWYG editor to build applications. And, IMO, VB was actually a good product. The problem is that it attracted "developers" who were poor/inexperienced, and so VB apps developed a reputation for being incredibly janky and bad quality.

The same thing is basically happening with AI now, except it's not constrained to a single platform, but instead it's infecting the entire software ecosystem.

Henchman21•2h ago
We turned our back on VB. Do we have the collective will to turn our back on AI? If so I suspect it’ll take a catalyzing event for it to begin. My hunch tells me no, no we don’t have the will.
AnotherGoodName•1h ago
Fwiw, I honestly think it was a mistake to turn our back on VB.

Yes, there were a lot of crappy, barely functioning programs made in it. But they were programs that wouldn’t have existed otherwise. E.g. for small businesses automating things, VB was amazing, and even if a program was barely functional it was better than nothing.

hnaccount_rng•27m ago
Came here looking for this comment!

I think we will need to find a way to communicate "this code is the result of serious engineering work and all tradeoffs have been thought about extensively" versus "this code has been vibecoded and no one really cares". Both ends of that spectrum have their place and absolutely will exist. But it's dangerous to confuse the two

tstrimple•16m ago
When the Derecho hit Iowa and large parts of my area were without power for over a week we got to discover just how many of our very large enterprise processes were dependent to some degree on "toy" apps built in "toy" technologies running on PCs under people's desks. Some of it clever but all of it fragile. It's easy to be a strong technical person and scoff at their efforts. Look how easily it failed! But it also ran for years with so few issues it never rose to IT's attention before a major event literally took the entire regional company offices offline. It caused us some pain as we had to relocate PCs to buildings with sufficient backup power. But overall the effort was far smaller than building all of those apps with the "proper" tools and processes in the first place.

Large companies can be a red tape nightmare for getting anything built. The process overload will kill simple non-strategic initiatives. I can understand and appreciate less technical people who grab whatever tool they can to solve their own problems when they run into blockers like that. Even if they don't solve it in the best way possible according to experts in the field. That feels like the hacker spirit to me.

antonvs•18m ago
> Do we have the collective will to turn our back on AI?

Why do you believe we should "turn our back on AI"? Have you used it enough to realize what a useful tool it can be?

Wouldn't it make more sense to learn to turn our backs on unhelpful uses of AI?

tstrimple•26m ago
It's the exact same thing every time a technical bar is lowered and more people can participate in something. From having to manually produce your own film, to having film processing readily available on demand, to not needing to process film at all because everyone has a camera in their pocket. The number of people taking photos has absolutely exploded. The average quality of photos has to have fallen through the floor. But you've also got a ton of people who couldn't participate previously for one reason or another who go on to do great things with their newfound capabilities.
thfuran•2h ago
>We're going to be swimming in an ocean of bad software

I think we already are. We're about to be drowning in a cesspit. The support for the broken software is going to be replaced by broken LLM agents.

p_v_doom•2h ago
> Good experienced devs will be able to make better software

I lowkey disagree. I think good experienced devs will be pressured to write worse software or be bottlenecked by having to deal with bad software. Depends on company and culture of course. But consider that you as expereinced dev now have to explain things that go completely over the head of the junior devs, and most likely the manager/PO, so you become the bottleneck, and all pressure will come down on you. You will hear all kinds of stuff like "80% there is enough" and "dont let perfect be the enemy of good" and "youre blocking the team, we have a deadline" and that will become even worse. Unless you're lucky enough to work in a place with actually good engineering culture.

imiric•1h ago
> I'm expecting to see so much more poor quality software being made. We're going to be swimming in an ocean of bad software.

That's my expectation as well.

The logical outcome of this is that the general public will eventually get fed up, and there will be an industry-wide crash, just like in 1983 and 2000. I suppose this is a requirement for any overly hyped technology to reach the Plateau of Productivity.

steveBK123•5h ago
I dealt with a 4x as expensive statement-of-work fixed price contract that was nearshored and then subbed out to a revolving cast of characters.

The SOW was so poorly specified that it was easy to maliciously comply with it, and it had no real acceptance tests. As a result legal didn't think IT would have a leg to stand on arguing with the vendor on the contract, and we ended up constantly re-negotiating on cost for them to make fixes just to get a codebase that never went live.

An example of how bad it was - imagine you have a database of metadata to generate downloader tasks in a tool like Airflow. But instead of any sane grouping of, say, the 100 sources with 1000 files each per day into 100-ish tasks, it generated a 700,000-task graph, because it went task-per-file-per-day.
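
Roughly the shape of the mistake, in toy Python (my reconstruction, not their actual code):

    from itertools import product

    SOURCES = [f"source_{i}" for i in range(100)]
    FILES_PER_SOURCE = 1000
    DAYS = 7

    def make_task(name: str) -> str:
        return name  # stand-in for a real scheduler's task object

    # Per-file-per-day explosion: 100 * 1000 * 7 = 700,000 tasks
    exploded = [make_task(f"download:{s}:file_{f}:day_{d}")
                for s, f, d in product(SOURCES, range(FILES_PER_SOURCE), range(DAYS))]

    # Sane grouping: one task per source per day, 100 * 7 = 700 tasks
    grouped = [make_task(f"download_all:{s}:day_{d}")
               for s, d in product(SOURCES, range(DAYS))]

    print(len(exploded), len(grouped))  # 700000 700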

We were using some sort of SaaS dag/scheduler tool at the time, and if we had deployed, we'd have been using 5x more tasks than the entire decades-old, 200-person company had used to date, and paid for it.

Or they implemented the file-arrival SLA checker such that it only alerted when a late file arrived. So if a file never arrives, it never alerts. And when a daily file arrives a week late, you get the alert on arrival, not a week earlier when it went missing.
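
In toy Python, the broken check vs. what it should have been (illustrative sketch only, not their actual code):

    from datetime import datetime

    def alert(msg: str) -> None:
        print(f"ALERT: {msg}")

    # What they built: the check only runs when a file actually arrives,
    # so a file that never arrives can never alert, and a week-late file
    # alerts a week too late.
    def on_file_arrival(name: str, deadline: datetime, arrived_at: datetime) -> None:
        if arrived_at > deadline:
            alert(f"{name} arrived late")

    # What an SLA checker should do: run at the deadline and alert on
    # absence, independent of any arrival event.
    def check_sla_at_deadline(name: str, arrived: bool) -> None:
        if not arrived:
            alert(f"{name} missed its SLA")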

sokoloff•4h ago
I have seen the revolving cast of characters bit play out several times. It’s as if they hire 1 or 2 competent people and rotate them to face the client that is currently screaming the loudest.

To be fair though, in your case it sounds like 51% (and maybe even 75+%) of the defect was in the specifications.

steveBK123•2h ago
Oh yeah, 75-90% of the outcome was determined by the bad specification/contract.

You can have a loose spec and trust the team to do the right thing if it's an internal team you will allocate budget/time to iterate. Not if you have a fixed time & cost contract.

ortusdux•6m ago
Nailing the SOWs and acceptance test requirements is key. They can mean the difference between toxic dog food and mail trucks that last decades.

https://en.wikipedia.org/wiki/2007_pet_food_recalls

https://en.wikipedia.org/wiki/Grumman_LLV

MaxBarraclough•5h ago
This sounds like a tale of failed 'waterfall model' software development.

Was it not possible to see the quality issues before the project was finished?

pjmlp•4h ago
As a participant in many kinds of similar projects, let's put it this way: the crew already knows the ship has a few holes while it's still in the harbour, but the captain decides to sail anyway.

Eventually you find yourself in deep waters, with the ship lower than it should be, routinely bailing out buckets of water, wishing for the nearest island, only to patch the ship with whatever is on that island and keep sailing to the next one, buckets at the ready.

After a couple of enterprise projects, one learns to either move into another business or learn to cope with this approach.

Which might be especially tricky given the job landscape in someone's region.

fredrikholm•4h ago
My suspicion is that all types of work are like this; a universal issue where quality and forethought are at odds with quantity and "good enough" (where good enough trends towards worse over time).

Before SE I had a bunch of vastly different jobs and they all suffered from something akin to crab bucket mentality where doing a good job was something you got away with.

I've had jobs where doing the right thing was something you kept to yourself or suffer for it.

pydry•4h ago
This almost seems to be a weird artefact of capitalism. I've worked on several projects which at some point became obviously doomed to almost everybody in the trenches, but management/investors/owners kept believing. Perception of reality did not permeate the class divide.

I wish I could make $$$ off this insight somehow, but I'm not sure it's possible.

breppp•4h ago
Create a revolutionary movement, take over the state and steal the money of the lower classes
disgruntledphd2•1h ago
I think this is driven more by hierarchy and power games rather than capitalism. Basically, if your superiors don't want to hear bad news, then either you'll tell them good news only or you'll be replaced by someone who will.

Source: I've been replaced by this process a number of times.

adwn•1h ago
> This almost seems to be a weird artefact of capitalism.

I don't see how this would be causally linked to capitalism in any meaningful way.

david-gpu•4h ago
I once saw something like that where there was an existing codebase and a different business unit in the company wanted to add a large new feature.

The contractors simply wanted to get paid, naturally. The people who paid them didn't understand the original codebase, and they did not communicate with the people who designed and built it either. The people who built the original code were overworked and saw the whole brouhaha as a burden over which they had no control.

It was a low seven figure contract. The feature was scrapped after two or three years while the original product lived on and evolved for many years after that.

I hope that management learned their lesson, but I doubt it.

pjc50•3h ago
The trick with waterfall is that discovering issues is deferred until the very last phases of test and user acceptance, at which point it's too late to do anything.
viraptor•2h ago
> They did not spend one second beyond the spec sheet and none of the common sense things that "follow" from the spec were there.

That's how lots of the early outsourced projects ended up. Perfectly matching the spec and not working.

> The whole thing was scrapped immediately.

And that's how it ended up too. Everything old is new again.

oytis•6h ago
I guess answering "you obviously didn't write it, please redo" is not an option, because then you are the dinosaur hindering company's march towards the AI future?
lionkor•5h ago
You also are never 100% sure if they wrote it
Cthulhu_•4h ago
Honestly, I don't think it matters who wrote it; ultimately it's about the code and the product, not the individual author.

That said, a lazy contribution - substandard code, or carelessly LLM-generated code - just wastes your time if your feedback is simply fed back into the LLM. Setting boundaries then is perfectly acceptable, but this isn't unique to LLMs.

rambambram•3h ago
Haha, good one.

You might make this easier by saying you just checked their code with your own AI system and then say it returned "you obviously didn't write it, please redo".

Nextgrid•5h ago
> How do people work with these issues?

You give up, approve the trash PRs, wait for it to blow up in production and let the company reap the rewards of their AI-augmented workforce, all while quietly looking for a different job or career altogether.

flir•5h ago
I've found "write it, then ask the chatbot for a code review" to be a good pattern. You have to be judicious about what you accept, but it's often good at tidying things up or catching corner cases I didn't consider. Reading your comment, it occurs to me that a junior could get into a lot of trouble with this pattern.
xiphias2•5h ago
I work alone, not in teams, but I use an LLM (codex-1) a lot, and it's extremely helpful. I accept that in return the code base is much lower quality than if I had written it myself.

What works for me is that after having lots of passing tests, I start refactoring the tests to get closer to property testing: basically, prove that the code works by running it through complex scenarios and checking that the state is good at every step, instead of just testing lots of independent cases. The better the test, the harder it is for LLMs to cheat.
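
A toy sketch of what I mean, using Python's hypothesis library (the Account class is a made-up stand-in for the real code under test):

    from hypothesis import given, strategies as st

    class Account:  # stand-in for the real code under test
        def __init__(self) -> None:
            self.balance = 0

        def deposit(self, amount: int) -> None:
            self.balance += amount

    @given(st.lists(st.integers(min_value=0)))
    def test_balance_invariant(deposits):
        account = Account()
        expected = 0
        for amount in deposits:
            account.deposit(amount)
            expected += amount
            # the state is checked after every step, not just at the end,
            # so hardcoding a few expected outputs won't pass
            assert account.balance == expected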

steveBK123•5h ago
I wonder how this trade-off will age. I'm not a Mag7/Saas/SV startup tech guy, so I've tended to work on systems that are in service & maintained for upwards of 10 years. It's not unusual to see 20 year old codebases in my field.

We scoff at clever code that's hard to understand because it hurts a team's ability to maintain it, but what about knowingly much lower quality code?

CharlieDigital•4h ago
When the price of building becomes low, you just toss it and build more.

Much like Ikea's low-cost replaceable furniture has replaced artisan, handmade furniture, and cheap plastic toys have replaced finely made artifacts, LLM-produced code is cheap and low effort; it's meant to be discarded.

Recognizing this, it should be used with that in mind. You might still buy a finely made sofa because it's high touch. But maybe the bookshelves from Ikea are fine.

stoneyhrm1•5h ago
> "good catch, I'll fix that"

I see this a lot and have even done so myself. I think a lot of people in the industry are a bit too socially aware and think that if they start a discussion they'll look like they're trying too hard.

It's stupid, yes, but plenty of times I've started discussions only to be brushed off or not even replied to, and I believed it was because my responses were too long and nobody actually cared.

jameshart•5h ago
That doesn’t sound like ‘social awareness’, it sounds like paranoia
Cthulhu_•5h ago
I feel the same way; we use Gitlab in our day to day, and often I find myself writing a long reply after fixing a code review issue, describing what I changed, resources used, etc... then hitting the "resolve" button, which collapses the comment and unless the reviewer has enabled notifications and actually reads them, I doubt they would ever see my well thought-out response.

But then, for me, writing is a way to organize thought as well, plus these remarks will stay in the thread for future reference. In theory anyway, in practice it's likely they'll switch from Gitlab to something else and all comments will be lost forever.

Which makes me wish for systems that archive review remarks into Git somehow. I'm sure they exist, but they're not commonly used.

aleph_minus_one•5h ago
Simply require of the junior developers that each pull request satisfy a very high standard. If they are not sure about something, they may ask, but if they send you a pull request of bad quality to review and you find something, they deserve a (small) tantrum.

It is likely not possible to completely forbid junior developers from using AI tools, but any pull request they create that contains (AI-generated) code they don't fully comprehend (they can google) will be rejected (to test this, simply ask them some non-trivial questions about the code). If they do so, again, these junior developers deserve a (small) tantrum.

imiric•38m ago
The thing is that a "very high standard" is not a measurable criterion. The project can have test coverage requirements and strict linting to catch basic syntax and logic problems, but how do you enforce simplicity, correctness, robustness, or ergonomics? These are abstract concepts that are difficult to determine, even for experienced developers, so I wouldn't expect less experienced developers to consider them. A code review process is still important, with or without LLMs.

So we can ask everyone using these tools to understand the code before submitting a PR, but that's the best we can do. There's no need to call anyone out for not meeting some invisible standard of quality.

2OEH8eoCRo0•5h ago
Why isn't your first question, "how did you test this?"
andrelaszlo•3h ago
You're right, I am starting to develop that habit.
UncleMeat•4h ago
> Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.

I didn't expect this initially but I am seeing it a ton at work now and it is infuriating. Some big change lands in my lap to review and it has a bunch of issues but they can ultimately be worked out. Then kaboom it is an entirely different change that I need to review from scratch. Usually the second review is just focused on the edits that fixed the comments from my first review. But now we have to start all over.

swader999•4h ago
The human PR/code review needs to be abandoned. I'm not sure how or what will replace it. Some kind of programmatic agent review/test loop, contractual code that meets SLAs, vertical slice architecture, microservices (shudder)...
cookiengineer•4h ago
I wanted to add to your points that I think that there's a lack of understanding in architecture, which the previous generation has learned through refactoring and unit tests.

If LLMs are able to write unit tests, this will get worse, because there will be no time spent reflecting on "what do I need" or "how can this be simplified". These are, in my opinion, what characterize the differences between a Developer, Engineer, and Architect mindset. And LLMs / vibe coding will never develop actual engineers or architects, because they can never develop that mindset.

The easiest programming language to spot those architectural mistakes in is coincidentally the one with the least syntax burden. In Go it's pretty easy to discover these types of issues in reviews, because you can check the integrated unit tests, which help a lot in narrowing down the complexity of code branches (and whether or not a branch was reached, for example).

In my opinion we need better testing/review methodologies. Fuzz testing, unit testing and integration testing isn't enough.

We need some kind of logical-inference tests which can prove that code branches are kept and called, and which let us confirm satisfiability.
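
Something in this direction already exists in SMT solvers. A minimal sketch with Python's z3 bindings (my own toy example, not an established methodology), checking that a branch's guard is satisfiable, i.e. that the branch is actually reachable:

    from z3 import And, Int, Solver, sat

    x = Int("x")
    solver = Solver()
    # Guard of the branch under test: taken when 0 < x < 10
    solver.add(And(x > 0, x < 10))
    # If no input satisfies the guard, the branch is dead code
    assert solver.check() == sat, "branch unreachable: guard is unsatisfiable"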

a_bonobo•4h ago
>This lead to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves.

It's funny, I have the same problem, but with subject matter expertise. I work with internal PR people, and they have clearly shifted their writing to be AI-assisted or even AI-driven. Now I as the SME get these AI-written blog posts and press releases, and I spend far more time getting all the hallucinations out of these texts.

It's an effort inversion, too - time spent correcting the PR-people's errors has tripled or quadrupled. They're supposed to assist me, not the other way around. I'm not the press release writer here.

And of course they don't 'learn' like your junior engineers - it's always AI, it's always different hallucinations.

P.S.: And yes, I've raised this internally with our leadership - at this rate we'll have 50% of the PR people next year; they're making themselves unemployed. I don't need a middleman whose job it is to copy-paste my email into ChatGPT and then send me the output; I can do that myself.

dkdbejwi383•3h ago
Part of the solution is pushing back when you spot tons of obvious lazy LLM errors instead of fixing them yourself. Otherwise there's not much incentive for them to improve their effort.
LtWorf•3h ago
I tried but my boss told me to get used to it. So now I no longer review code at all.
a_bonobo•1h ago
Yes, I've tried to set an internal standard for AI usage: at the least, the PR people have to tell us if they use AI. It completely changes how we approach editing a text, AI-written vs human-written (humans don't hallucinate citations, for a start).

Of course this is impossible to enforce, and I believe that the PR people would rather hide their AI usage. (As I wrote above why pay high salaries to people who automate themselves away?)

kevinventullo•2h ago
… I can do that myself.

So then you see where this is going.

a_bonobo•1h ago
Yep! I'll have 3 jobs, but I'll be paid for 1.

Edit: actually, that's the story of my life. I've been working for 20 years and every 5 years or so, stuff gets reshuffled so I have 3 more jobs instead of 1. It feels like I have 20 jobs by now, but still the same salary. And yes I've switched employers and even industries. I guess the key is to survive at the end of the funneling.

fny•4h ago
Have them first write a "code spec" in the repo with all the interfaces defined and comments that describe the behaviors.

    """
    This is the new adder feature. Internally it uses chained Adders to multiply:
    Adder(Adder(Adder(x, y), y), ...)
    """
    class Adder:
       # public attributes x and y
       def __init__(self, x: float, y: float) -> None:
          raise NotImplementedError()
 
       def add(self) -> float:
          raise NotImplementedError()
 
    class Muliplier:
       # public attributes x and y
       # should perform multiplication with repeated adders
       def __init__(self, x: float, y: float) -> None:
          raise NotImplementedError()
 
       def multiply(self) -> float:
          raise NotImplementedError()
This is a really dumb example (frankly something Claude would write), but it illustrates that they should do this for external interfaces and implementation details.

For changes, you'd do the same thing. Specify it as comments and "high level" code ("# remove this class and switch to Multiplier") etc.

Then spec -> review -> tests -> review -> code -> review.

Depending on how much you trust a dev, you can kill some review steps.

1. It's harder to vibe good specs like this from the start, and prevents Claude from being magical (e.g. executing code to make sure things work)

2. You're embedding a design process into reviews which is useful even if they're coding by hand.

3. It simplifies reviewing generated code because at least the interfaces should be respected.

This is the pattern I've been using personally to wrangle ChatGPT and Claude's behavior into submission.

akkad33•4h ago
Could you tell which language they were coding in?
andrelaszlo•3h ago
A mix, but a majority Ruby, with some shell scripts and Terraform.

My gut feeling is that it would generalize to typed languages, Go, Erlang, even Haskell etc, but maybe some of them make life easier for the reviewer in some ways? What are your thoughts on that?

exiguus•4h ago
Thanks for these insights. I am curious and want to know: is it also a 'good catch, I'll fix that' when you pair program or mob? Or better, did you notice any differences in behavior and issues while pair or mob programming with juniors (instead of using pull requests)?
jedimastert•4h ago
> - During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).

Would you mind drilling down into this a bit more? I might be dealing with a similar problem and would appreciate if you have any insight

acedTrex•3h ago
Basically the juniors just ask the LLM for an explanation of what the problem is and then fix what the LLM interprets your review to be talking about.

The way you solve this is to pull your junior into a call and walk them through your comments one by one verbally, expecting them to comprehend the issues every time.

andrelaszlo•2h ago
The "good catch" thing is something I do, too, but mostly for short review comments like "this will blow up if x is null" etc.

I had to think a bit about it, but when it feels off it can be something like:

- I wrote several paragraphs explaining my reasoning, expecting some follow-up questions.

- The "fix" didn't really address my concerns, making it seem like they just said "okay" without really trying to understand. (The times when the whole PR is replaced make it seem like my review was also just forwarded to the LLM, haha)

- I'm also comparing to how I often (especially earlier in my career) thought a lot about how to solve things, and when I got constructive feedback it felt pretty rewarding - and I could often give my own reasoning for why I did things a certain way. Sometimes I had tried a bunch of the things that the reviewer suggested, leading to a more lively back-and-forth. This could just be me, of course, or a cultural thing, but my expectation also comes from how other developers I've worked with react to my reviews.

Does that make sense? I'd be interested in hearing more about the problem you're dealing with. If this is not the right place, feel free to send an email :)

exe34•3h ago
> - Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.

This kind of thing drove me mad even before LLMs or coding - it started at school when I helped people with homework. People would insist on switching to an entirely different approach midway through my explanation of how to fix the first one.

ericyd•3h ago
> always require a lot of (passing) tests

My favorite LLM-generated code I've seen in PRs lately is

    expect(true).toBe(true)
Look ma! Tests aren't flaky anymore!
acedTrex•3h ago
The standard competency markers we use to judge code have been hijacked. The new world is very low trust and very painful.
diogolsq•3h ago
Code review has become the new bottleneck, since it’s the layer that prevents sloppy AI-generated code from entering the codebase.

One thing I do that helps clean things up before I send a PR is writing a summary. You might consider encouraging your peers to do the same.

## What Changed?

Functional Changes:

    - New service for importing data
    - New async job for dealing with z

Non-functional Changes:

    - Refactoring of Class X
    - Removal of outdated code

It might not seem like much, but writing this summary forces you to read through all the changes and reflect. You often catch outdated comments, dead functions left after extractions, or other things that can be improved, before asking a colleague to review it.

It also makes the reviewer’s life easier, because even before they look at the code, they already know what to expect.

imiric•1h ago
Ha. Almost always when I see PRs with such summaries, I can assume that both the summary and the code have been AI-generated.

PRs in general shouldn't require elaborate summaries. That's what commit messages are for. If the PR includes many commits where a summary might help, then that might be a sign that there should be multiple PRs.

nurettin•2h ago
If they insist on using LLMs to generate trash, just use LLMs to do trash reviews on their code.
lubujackson•2h ago
The original approach was to be a surgeon and minimally cut the code to save the patient (the PR). You need to change your thinking and realize the architecture of the prompt was wrong. Talk in abstractions and let them fully revise the PR, like "this should be refactored to reraise errors to the calling function" instead of pinpointing single lines.

In other words, we need to code review the same way we interact with LLMs - point to the overarching flaw and request a reroll.

moi2388•1h ago
This is exactly my experience. Plus documentation is no longer being read because the LLM already generated the code, so the juniors don’t even know what to check before handing in their PR
ksri•46m ago
Struggling with the same issues with junior developers. I've been asking for an implementation plan and iterating on it. The typical workflow is to commit the implementation plan and review it as part of a PR. It takes 2-3 iterations to get right. Then the developer asks Claude Code to implement based on the markdown. I've seen good results with this.

Another thing I do is ask for the Claude session log file. The inputs and thoughts they provided to Claude give me a lot more insight than Claude's output. Quite often I am able to correct the thought process when I know how they are thinking. I've found junior developers treat Claude like SMS - small, ambiguous messages with very little context, hoping it will perform magic. By reviewing the Claude session file, I try to fix this superficial prompting behaviour.

And third, I've realized Claude works best if the code itself is structured well and has tests, tools to debug, and documentation. So I spend more time on tooling, so that Claude can use these tools to investigate issues, write tests, and iterate faster.

Still a far way to go, but this seems promising right now.

mixmastamyk•45m ago
These tools seem to work for accelerating seniors, but not for juniors. How are juniors supposed to learn if they aren't doing?
seanmcdirmid•29m ago
I don't give my interns greenfield projects, and they are usually hack jobs like getting A working with B, which means they can't really rely on LLMs to do much of the coding, and must instead try, run the test, adjust, and try again. More like junior investigators who happen to write some code, I guess. I imagine this is extremely group-specific though.

For junior devs it's about the same: I'm assigning hack jobs, because most of what we need to do are hack jobs. The code really isn't the bottleneck in that case; the research needed to write the code is.

oc1•6h ago
Yep, code won't matter in the future. Code isn't the bottleneck anymore, and that's a welcome liberation for us professional developers. Now we can move on.
bluefirebrand•5h ago
Code never was the bottleneck though
kulahan•6h ago
This shouldn't be surprising to anyone in software development. Regardless of how essential your software is, you can just shit out any stupid-ass thing that vaguely works and you've finished your ticket.

Who thought lazy devs were the bottleneck? The industry needs 8x as much regulation as it has now; they can do whatever they want at the moment lol.

am17an•6h ago
One thing I despise about LLMs is transferring the cognitive load to a machine. It's just another form of tech debt. And you have to repay it pretty fast as the project grows.
kabdib•6h ago
my LLM win this year was to give the corporate AI my last year's worth of notes, emails and documents and ask it to write my self review. it did a great job. i'm never writing another one of those stupid bits of psychological torture again

otherwise i'm writing embedded systems. fine, LLM, you hold the scope probe and figure out why that PWM is glitching

williamdclt•6h ago
That's a really good idea, with the double benefit that it would incentivise me to keep better track of information and communication, and to take more notes, all of which certainly has various other benefits.
ysofunny•6h ago
but as soon as you are doing that,

the people who have to read your self-review will simply throw what you gave them into their own instance of the same corporate AI

at which point why not simply let the corporate AI tell you what to do as your complete job description; the AI will tell you to "please hold the scope probe as chatbotAI branding-opportunity fixes the glitches in the PWM"

I guess we pass the butter now...

2d8a875f-39a2-4•6h ago
The author puts the BLUF: "The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication."

They're not wrong, but they're missing the point. These bottlenecks can be reduced when there are fewer humans involved.

Somewhat cynically:

code reviews: now sometimes there's just one person involved (reviewing LLM code) instead of two (code author + reviewer)

knowledge transfer: fewer people involved means this is less of an overhead

debugging: no change, yet

coordination and communication: fewer people means less overhead

LLMs shift the workload — they don’t remove it: sure, but shifting workload onto automation reduces the people involved

Understanding code is still the hard part: not much change, yet

Teams still rely on trust and shared context: much easier when there are fewer people involved

... and so on.

"Fewer humans involved" remains a high priority goal for a lot of employers. You can never forget that.

noelwelsh•6h ago
> The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication. All of this wrapped inside the labyrinth of tickets, planning meetings, and agile rituals.

Most of these only exist because one person cannot code fast enough to produce all the code. If one programmer were fast enough, you would not need a team, and then you wouldn't have coordination and communication overhead, and so on.

brazzy•6h ago
That hypothetical one person would not just need to produce the code, but also understand how it fulfills the requirements. Otherwise they are unable to fix problems or make changes.

If the amount of code grows without bounds and is an incoherent mess, team sizes may not, in fact, actually get smaller.

noelwelsh•5h ago
Agreed. I don't think anyone can produce useful code without understanding what it should do.

One useful dimension along which to consider team organization is the "lone genius" to "infinite monkeys on typewriters" axis. Agile as usually practised, microservices, and other recent techniques seem to me to be addressing the "monkeys on typewriters" end of the spectrum. Smalltalk and Common Lisp were built around the idea of putting amazing tools in the hands of a single dev or a small group of devs. There are still things that address this group (e.g. it's part of the Rails philosophy) but it is less prominent.

aitchnyu•6h ago
Has anybody previously had Gantt chart paths like "non-code-1 -> code-1 -> non-code-2 -> code-2" and transformed them into coding tasks, taking advantage of the newfound coding speed? What did you do? I would need buy-in from people.
austin-cheney•6h ago
Writing software is like a combination of writing a short story, cleaning your room, and planning a vacation. The bottleneck is always low confidence, much like work anywhere else.

I have watched employers try for almost 20 years to solve and cheat their way around this low confidence. The result is always the same: some shitty form of pattern copy/paste, missing originality, and long delivery timelines for really basic features. The reason for this is that nobody wants to invest in training/baselines, and there is great fear that if they do have something perceived as talent, it's irreplaceable and can leave.

My current job in enterprise API management is the first time where the bottleneck is different. Clearly the bottleneck is the customer’s low confidence, as opposed to the developers, and manifests as a very slow requirements gathering process.

tropicalfruit•6h ago
bottom line is the only thing that matters in the end
Schnitz•6h ago
I think a lot of teams will wrestle with the existing code review process being abused for quite a while. A lot of people are lazy or get into tech because it's easy money. The combination of LLMs and a solid code review process means you can, more easily than ever, submit slop and not even be blamed for the results.
threemux•6h ago
In a professional setting, I agree 100%, no notes. Where LLMs have helped me the most is actually side projects. There, writing the code is absolutely the bottleneck - I literally can't (or perhaps won't is more truthful) allocate enough time to write code for the little apps I've thought of to solve some small problem.
dgellow•5h ago
Agreed fully. If I have 1-2 hours a day with Claude Code, I end the week with a personal project I can actually use. Or I spend like half a weekend day to see if an idea makes sense.

But I think that makes them invaluable in professional contexts. There is so much tooling we never have the time to write to improve stuff. Spend 1-2h with Claude code and you can have an admin dashboard, or some automation for something that was done manually before.

A coworker comes to me with a question about our DB content, Claude gives me a SQL query for what they need, I review it and copy-paste it to Metabase or Retool, and they now don't have to be blocked by engineering anymore. That type of thing has been my motivation for mcp-front[0]; I wanted my non-eng coworkers to be able to do that whole loop by themselves.

[0] https://github.com/dgellow/mcp-front

mritchie712•4h ago
we specialize in "don't ask engineering for SQL" at https://www.definite.app/.

we spin up a data lake, load all your data and educate an agent on your data.

Cthulhu_•4h ago
A fair point; at this point in my career, I can't just spend weeks on something, plus I know all of the non-functionals and longer-term things I should keep in mind. Even when skipping things like tests, everything just costs more work.
hbn•42m ago
LLMs help me at work writing one-off scripts where I can verify they're behaving correctly. Or I'll give it a few lines of code where I don't like how they read and ask it if there's a cleaner way to write it, i.e. if there's maybe an API or method on a class that I'm forgetting/didn't know about, and I can understand its suggestion for a rewrite.

But getting it to spit out hundreds or even thousands of lines of code and then just happy path testing and shipping is insane.

I'm really concerned about software quality heading into the future.

khazhoux•6h ago
Yet another article trying to take away from the impact of LLMs. This one is more subtle than most, but still the message is "this problem that was solved, was never actually the problem."

Except... writing code is often a bottleneck. Yeah, code reviews, understanding the domain, etc. are also bottlenecks. But Cursor lets me write apps and tools in 1/20th the time it would take me, in an area where I am an expert. It very much has removed my biggest bottleneck.

Aeglaecia•4h ago
I feel like the author gave a pretty balanced take by recognising multiple ends of the equation... do you yourself recognise that the speedup described in your perspective is contingent on the environment you have applied LLMs to? Regarding frontend and WYSIWYG, this is an environment where edge cases are deprioritized, and LLMs thus excel. On the other hand, in an environment reliant on non-publicly-available technical documentation, LLMs are borderline useless. And in an environment where edge cases are paramount, LLMs actively cause harm, as described elsewhere in the thread. These three environments are concurrently true; they do not detract from each other.
albertojacini•6h ago
The title says it all
alkonaut•6h ago
I agree with most of this. Writing code is one of the easy bits of Software Development. Writing the specifications about what to write is hard.

Once you can specify what to create, and do it well, then actually creating it is quite cheap.

However, as a software developer who often feels pulled into 10 hours of meetings to argue the benefits of one 2-hour thing over another 2-hour thing, my view is often "let's do both and see which one comes out best". The view of less technical participants in meetings is always that development is expensive, so we must at all costs avoid developing the wrong thing.

AI can really take that equation to the extreme. You can make ten different crappy and non-working proof-of-concept things very cheaply. Then throw them out and manually write (or adapt) the final solution just like you always did. But the hard part wasn't writing the code, it was that meeting where it was decided how it should work. But just like discussing a visual design is helped by having sketches, I think "more code" isn't necessarily bad. AI's produce sub par code very quickly. And there are good uses for that: it's a sketch tool for code.

bluefirebrand•5h ago
> AI's produce sub par code very quickly. And there are good uses for that: it's a sketch tool for code

The problem is that the business bleepheads see the thing work (badly) and just say "looks great as is, let's ship it" and now you're saddled with that crap code forever

padjo•6h ago
I saw one tech company say they're going to measure the impact of AI tools by counting merged pull requests per engineer. Seems like a great recipe for counting AI bullshit churn as positive impact.
AbstractH24•6h ago
The difference between a hobbyist who codes and a professional is all of the things listed in this article.

As someone who shamefully falls more in the hobbyist camp, even when they code in the workplace, and has always wanted to cross what I perceived as a chasm, I’m curious, where did most people who code for a living learn these skills?

tigroferoce•5h ago
years of experience and iterations
dgellow•5h ago
To answer the "where": in a workplace environment. Some people seem able to develop that set of skills by joining serious open source projects. But really, you have to learn it on the spot.

Great teams do take that into account and will train newcomers in what it means to be a "professional" developer. But then the question becomes: how do you find such a team? And I don't think there is a trick here. You have to look around, follow people who seem great, try to join teams, and see how it goes.

mreid•5h ago
A lot of those skills come from thinking about development in a team as a system and asking: where do things frequently go wrong or take too long?

Practice clearly and concisely expressing what you understand the problem to be. This could be a problem with some code, some missing knowledge, or a bad process.

Check to see whether everyone understands and agrees. If not, try to target the root of the misunderstanding and try again. Sometimes you'll need to write a short document to make things clear. Once there is a shared understanding, people can start talking about solutions. Once everyone agrees on a solution, someone can go implement it.

Like any skill, if you practice this loop often enough and take time to reflect on what worked and what didn’t, you slowly find that you develop a facility for it.

throwaw12•5h ago
I will disagree with the author.

If you look through the lens of BigTech and corporations, then yes, code was not a bottleneck.

But if you look from the perspective of startups, rigorous planning existed because resources to produce features were limited, which means producing working code was the bottleneck: in small teams you don't have coordination overhead, and the idea and vision are already clear -> produce something they have discussed and agreed on already.

My takeaway is, when discussing broad topics like usefulness of AI/LLM, don't generalize your assumptions. Code was bottleneck for some, not for others

Sammi•5h ago
What I've seen is exactly this, that LLMs give the most leverage to small and highly capable teams of devs. You need to be highly capable in order to get good output from LLMs, and large teams still have the coordination overhead that slows them down. LLMs supercharge the small teams that were already good.
zhobbs•1h ago
I think another generalization people make here is around complexity. Many developers work on apps that just aren't that complex. Glorified CMS's mostly doing CRUD with well established code patterns.

Sure, LLMs might create slop on novel problems, but for a non-tech company that needs to "create a new CRUD route" and an accompanying form, LLMs are smart enough.

lokar•1h ago
I agree. I spent most of my career on complex distributed infrastructure. I spent most of my time reading and thinking, not coding.
aosmith•5h ago
This resonates a little; there are side effects you need to consider... When I quit smoking I noticed my code quality dropped. It wasn't because I missed the cigarettes; it was the mental break with a solid social excuse that I was missing. I started taking smoke breaks without the smoke, and things returned to normal.
worldsayshi•5h ago
I think the bottleneck can be summarized as verification and understanding. While that was the bottleneck before as well, now it makes even more sense to find comprehensive ways to work with it. If you can quickly verify that the code is doing the right thing and that it is understandable, then you might achieve a productivity increase.

And there's no good reason why LLM's can't at least partially help with that.

lionkor•5h ago
The difference between AI as autocomplete and vibe coding couldn't be bigger. It's like the difference between having your phone with you on a trip somewhere to take pictures with, and just watching a video of the place on your phone at home.

Autocomplete speeds up code generation by an order of magnitude, easily, with no real downside when used by experienced devs. Vibe coding on the other hand completely replaces the programmer and causes lots of new issues.

bluefirebrand•5h ago
> Autocomplete speeds up code generation by an order of magnitude, easily, with no real downside when used by experienced devs

Strongly disagree. Autocomplete thinks slower than I do, so if I want to try and take advantage of it I have to slow myself down a bunch

Instead of just writing a function, I write a line or two, wait to see what the auto complete suggests, read it, understand it, often realize it is wrong and then keep typing. Then it suggests something else, rinse, repeat

I get negative value from it and turned it off eventually. At least intellisense gives instant suggestions ...

z3t4•5h ago
When I learned coding it took a lot of effort just to get something to work at all; it took many years until I could take an idea and write code that works right away once the spelling errors have been fixed. Now I have colleagues who have no idea what they're doing, but AI gives them code that works... Meanwhile the coding standards, languages and frameworks change faster than I have time to keep up. I always liked code that was simple, easy to understand, and easy to change, remove and rewrite. Writing and working with such code is very satisfying. But no one cares about code anyway. It's more of a brutalist abstract artform that very few people appreciate.
brokegrammer•5h ago
For me, writing CSS and coming up with professional looking designs were huge bottlenecks. Now I delegate those tasks to LLMs.

I recently started working on a client's project where we were planning on hiring a designer to build the front-end UI. Turns out, Gemini can generate really good UIs. Now we're saving a lot of time because I don't have to wait on the designer to provide designs before I can start building. The cost savings are most welcome as well.

Coding is definitely a bottleneck because my client still needs my help to write code. In the future, non-programmers should be able to build products on their own.

hn_throw2025•5h ago
This is something that’s been in my mind too.

I don’t think there’s enough distinction between using LLMs for frontend and backend in discussions similar to these.

Using it for things like CSS/Tailwind/UI widget layout seems like a low risk timesaver.

dmezzetti•5h ago
There have long been ways to reduce writing boilerplate code with IDEs. AI code generation is just another tool, and it will help enable competent people.
bluefirebrand•5h ago
Not unless it is deterministic

If I have to manually review the boilerplate after it's generated, then I may as well just write it myself. AI is not improving this unless you just blindly trust it without review, AND YOU SHOULDN'T

dmezzetti•4h ago
I'm not sure many seasoned developers are really using AI that much in their workflows.
bluefirebrand•4h ago
If they aren't, I wish they would speak up and push back against management more

If there's a secret, silent majority of seasoned devs who are just quietly trying to weather this, I wish they would speak up

But I guess just getting those paycheques is too comfy

dmezzetti•4h ago
Software managers have long pushed "productivity" tools on developers. Most don't stick and this is likely similar. It's best to hire smart people and let them use whatever stack works best for them.
sublimefire•5h ago
A bit more interesting is the inverse of this: what will win in the next 10 years?

IMO user expectations are now so high that you need to create websites, apps, auth, payment integration, customer support forums and chats. And this is just to break the ice and give the business a good footing to move forward. You can see how this is a problem for a non-technical person. Nobody will hire someone to do all that, as it would be prohibitively expensive. AI is not for the engineers; it is "good enough" for folks who do not understand code.

A lot depends on where the money will be invested, and on what consumers will like as well. I bet the current wave of AI coding will morph into other spheres to try and improve efficiency.

bGl2YW5j•4h ago
I can’t get over the idea that I won’t ever trust my data to a product made entirely by AI; one with no or limited human oversight.
throwaway63783•5h ago
I woke up saying “no” into my pillow over and over again this morning about this problem.

There are two ways forward:

- Those of us that have been vibing revert to having LLMs generate code in small bits that our brains are fast enough to analyze and process, but LLMs are increasingly optimized to create code that is better and better, making it seem like this is a poor use of time, since “LLMs will just rewrite it in a few months.”

- We just have a hell of a time, in a bad way, some of us losing our jobs, because the code looks well-thought out but wasn’t, at ever increasing scale.

I have wavered over the past months in my attitude after having used it to much success in some cases and having gotten in over my head in beautiful crap in the more important ones.

I have (too) many years of experience, and have existed on a combination of good enough, clear enough code with consideration for the future along with a decent level of understanding, trust in people, and distrust in scenarios.

But this situation is flogging what remains of me. Developers are being mentored by something that cannot mentor and yet it does, and there is no need for me, not in a way that matters to them.

I believe that I’ll be fired, and when I am, I may take one or both of two roads:

1. I’ll continue to use LLMs on my own hoping that something will be created that feeds my family and pays the bills, eventually taking another job where I get fired again, because my mind isn’t what it was.

2. I do one of the few manual labor jobs that require no reasoning and are accepting of a slow and unreliable neurodivergent, if there are any; I don’t think there truly are.

I’ve been close to #2 before. I learned that almost everything that is dear to you relies on your functioning a certain way. I believe that I can depend on God to be there for me, but beyond that, I know that it’s on me. I’m responsible for what I can do.

LLMs and those AIs that come after them to do the same- they can’t fill the hole in others’ lives the way that you can, even if you’re a piece of shit like I am.

So, maybe LLMs write puzzling code as they puzzle out our inane desires and needs. Maybe we lose our jobs. Maybe we hobble along slowly creating decent code. It doesn’t matter. What matters is that you be you and be your best, and support others.

richx•5h ago
I work on business software.

I think one very important aspect is requirements collection and definition. This includes communicating with the business users, trying to understand the issues and needs the software is supposed to address, and validating whether the actual software solves them (sufficiently).

All of this requires human domain knowledge, communication and coordination skills.

neoden•5h ago
> Now, with LLMs making it easy to generate working code faster than ever, a new narrative has emerged: that writing code was the bottleneck, and we’ve finally cracked it.

This narrative is not new. Many times I've seen decisions made on the basis of "does it require writing any code or not". But I agree with the sentiment; the problem is not the code itself but the cost of ownership of that code: how it is tested, where it is deployed, how it is monitored, by whom it's maintained, etc.

marginalia_nu•5h ago
I think the supposed bottlenecks are mostly a consequence of attempting to increase development speed by throwing additional developers at the problem. They're problems that trivially don't exist for a solo dev, and there's a strong argument that a small team won't suffer much from them either.

If you can use tools to increase individual developer productivity (say, all else being equal, code output 2x as fast) in a way that lets you cut the team size in half, you'll likely see a significant productivity benefit, since your communication overhead has gone down in the process.

This is of course assuming a frictionless ideal gas at STP where the tool you're looking at is a straight force multiplier.
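
To put rough numbers on that (my own back-of-the-envelope sketch, not anything from the thread): pairwise communication channels grow quadratically with headcount, so halving a team cuts coordination cost by roughly 4x, not 2x.

    # Brooks-style pairwise channels: n * (n - 1) / 2.
    # Halving a team of 16 to 8 drops channels from 120 to 28.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for size in (4, 8, 16):
        print(size, channels(size))  # 4 -> 6, 8 -> 28, 16 -> 120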

ogou•5h ago
My last job had a team that was about 50% temp and contract. When the LLMs got popular, I could tell right away. When I reviewed their code, it was completely different from their actual style. The seniors pushed back because it was costing us more time to review and we knew it was generated. Also, they couldn't talk about what they did in meetings. They didn't know what it was really doing. Eventually the department manager got tired of our complaining and said "it's all inevitable." Then those mercenaries started to just rubber-stamp each other's PRs. That led to some colossal fuckups in production. Some of them were fired quietly, and the new people promptly started doing the same thing. Why should they care? It's just a short-term contract on the way to the big payday, right?
gexla•5h ago
In addition to the poor code we spend time on, we get to lose even more time endlessly talking about it and working on ways around the issues. I swear I have spent as much time tinkering with these models and the tooling as it took me to bring my first skills up to a hireable level. People who ask me if "ChatGPT can build a web app" are really asking if they can build the thing without learning anything. I have bad news for them...
al_borland•5h ago
The most outspoken person against LLMs on my team would bring this up a lot. Though the biggest bottleneck he identified was the politics of actually coming to agreement on the spec of what to write. Even with perfect AI software engineers, this is still the issue, as someone still needs to tell the AI what to do. If no one is willing to do that, what’s the point of any of this?
whatevsmate•5h ago
Wow, a lot of the stories people are writing here are super depressing. If a junior developer is delivering you a pile of code that doesn’t work, hasn’t been manually tested and verified by them, hasn’t been carefully pared down to its essential parts, and doesn’t communicate anything about itself either through code style, comments or docs … then you are already working with an LLM; it just so happens to be hosted in, or parsed through, a wetware interface. Critical thinking and taking responsibility for the outcome is the real job and always has been.

And, cynically, I bet a software LLM will be more responsive to your feedback than the over-educated and overpaid junior “engineer” will be. Actually I take it back, I don’t think this take is cynical at all.

raincole•4h ago
People think juniors submitting LLM-generated code to seniors for review is a sign of how bad LLMs are.

I see it as a sign of how bad juniors are, and of the need for seniors to interact with LLMs directly, without the middlemen.

m_mueller•4h ago
The main problem in this environment, IMO: how does a junior become a senior, or even a bad junior become a good junior? People aren't learning fundamentals anymore beyond what's taught, and all the rest of the 'trade knowledge' is now never experienced; people just trust that the LLM has absorbed it sufficiently. Engineering is all about trade-offs. Knowing why, out of 10 possible ways of achieving something, 4 are valid contenders and possibly 1-2 are best in the current scenario, and even which questions to ask to get to that answer, is what makes a senior.
whatevsmate•3h ago
The LLM is the coding tool, not the arbiter of outcome.

A human’s abilities to assess, interrogate, compare, research, and develop intuition are all skills entirely independent of the coding tool. Those skills are developed through project work, delivering meaningful stuff to someone who cares enough to use it and give feedback (e.g. customers), making things go whoosh in production, etc.

This is an XY problem, and the real Y is galaxy brains submitting unvalidated, shoddy work that makes good outcomes harder rather than easier to reach.

OvbiousError•2h ago
LLMs are so easy to use though, it's addictive. Even as a senior I find myself asking LLMs stuff I know I should be looking up online instead.
whatevsmate•1h ago
I use LLMs to code. I think they’re great tools and learning the new ropes has been fun as hell. Juniors should use them too. But any claim that the LLM is responsible for garbage code being pushed into PRs is misreading the actual state of play imo.
intended•2h ago
Why should a Jr dev NOT use an LLM? It's the skill of the future; it's even an underlying plank in your argument!

Jr devs are responding to incentives to learn how to LLM, which we are saying all coders need to do.

So now we have to torture the argument to create a carve-out for junior devs - THEY need to learn critical thinking and to take responsibility.

Using an LLM directly reduces your understanding of whatever you used it to write, so you can't have both: learning how to code, and making sure your skills are future-proof.

whatevsmate•1h ago
Nothing I wrote is in counterpoint to this.

There’s no carve out. Anyone pushing thoughtless junk in a PR for someone else to review is eschewing responsibility.

dxroshan•5h ago
The author doesn't give any arguments to support his claim.
AshleysBrain•5h ago
This reminds me of the quote by Robert C. Martin[1]: "the ratio of time spent reading [code] versus writing is well over 10 to 1".

If programmers spend 90%+ of their time reading code rather than writing it, then LLM-generated code is optimizing only a small amount of the total work of programming. That seems to be similar to the point this blog is making.

[1] https://www.goodreads.com/quotes/835238-indeed-the-ratio-of-...
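
A quick Amdahl's-law sketch of why that ratio matters (my own numbers, assuming the 10:1 split holds): even if an LLM made writing essentially free, total throughput barely moves.

    # Speeding up only the writing portion of the work.
    writing_fraction = 1 / 11  # from the ~10:1 reading-to-writing ratio
    writing_speedup = 100      # assume an LLM makes writing ~100x faster
    overall = 1 / ((1 - writing_fraction) + writing_fraction / writing_speedup)
    print(f"{overall:.2f}x overall")  # ~1.10x total speedup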

kgwgk•5h ago
Even worse, in some cases it may be decreasing the writing time and increasing the reading time without reducing the total work.
pragmatic•4h ago
Unfortunately, the micro-methods his clean coding style produces end up doing the exact opposite.

Context is never close at hand; it is scattered all over the place, defeating the purpose.

JonChesterfield•1h ago
That ratio no longer holds if people don't look at the code and just feed it back into a new LLM.

People used to resist reading machine-generated output. Look at the code generator / source code / compiler, not at the machine code / tables / XML it produces.

That resistance hasn't gone anywhere. No one wants to read 20k lines of generated C++ nonsense that gcc begrudgingly accepted, so they won't read it. Excitingly, the code generator is no longer deterministic, and the 'source code prompt' isn't written down, so really what we've got is rapidly growing piles of ascii-encoded binaries accumulating in source control. Until we give up on git, anyway.

It's a decently exciting time to be in software.

pjmlp•5h ago
That is why there is a big difference between being a software engineer or software developer and being a plain coder; titles carry more than words.
bobsmooth•5h ago
I'm less concerned with professionals using LLMs to code and more excited by the idea of regular people using LLMs to create programs that solve their problems.
desio•4h ago
Maybe true, but not true enough.
calrain•4h ago
I've always enjoyed software design; for me the coding was the bottleneck, and it was frustrating to roll through different approaches when I so clearly knew the outcome that I wanted.

Using Claude Code to first write specs, then break it down into cards, build glossaries, design blueprints, and finally write code, is just a perfect fit for someone like me.

I know the fundamentals of programming, but since 1978 I've written in so many languages that the syntax now gets in the way. I just want to write code that does what I want, and LLMs are beyond amazing at that.

I'm building APIs and implementing things I'd never have dreamed of spending time on learning, and I can focus on what I really want: design, optimisation, simplification, and outcomes.

LLMs are amazing for me.

pragmatic•3h ago
Agreed, LLMs are fantastic autocomplete for experts.

Often giving 90% of what you need.

But those junior devs…

conartist6•4h ago
My philosophy is dirt simple:

I am the pointy end of the spear.

orwin•4h ago
I used Sonnet4 to write my last frontend task, fully, with minimal input. It is so much better than ChatGPT it's unbelievable. But while a 6-hour coding task was transformed into a 30-minute supervision task that generated good, correct code, I was a bit afraid for new engineers coming into an old project.

How are you supposed to understand code if you don't at least read it and fail a bit?

I'll continue using Sonnet4 for frontend personally; it has always been a pain point in the team, and I ended up being the most knowledgeable on it. Unless it's a new code architecture, I will understand what was changed and why, so I have confidence I can handle rapid iteration of code on it, but my coworkers who already struggled with our design will probably struggle even more.

Sadly, I think in the end our code will be worse, but we are a team of 5 doing the work of a team of 8, so any help is welcome. (We used to do the work of 15, but our 10x developer sadly (for us) got caught being his excellent self by the CTO and now handles a new project. Hopefully with executive-level pay.)

pragmatic•3h ago
LLMs are fantastic at summaries and finding where XYZ happens.

“Where is the customer entity saved to the database?”

data_yum_yum•4h ago
Code is always the bottleneck because people aren’t always thoughtful about how they design it.

Coordination, communication, etc. are honestly not that big of a deal if you have the right people. If you are working with the wrong people, coordination and communication will never be great no matter what “coordination tools” you bring in. If you have a good team, we could be sending messenger pigeons for all I care and things would still work out.

Just my opinions.

ivolimmen•4h ago
Thank you; this is exactly what was bothering me. This is my opinion as well; you just found the words I could not find!
afro88•4h ago
This is a strawman, isn't it? I haven't read one post or comment saying that writing code is "the bottleneck".

It's something that takes time. That time is now greatly reduced. So you can try more ideas and explore problems by trying solutions quickly instead of just talking about them.

Let's also not ignore the other side of this. The need for shared understanding, knowledge transfer, etc. is close to zero if your team is agents and your code is the input context (with the actual code at the level machine code is today: very rarely, if ever, looked at). That's kinda where we're heading. Software is about to get much grander, and your team is individuals working on loosely connected parts of the product. Potentially hundreds of them.

bob1029•4h ago
I used to think authoring code was the bottleneck. It took a solid decade to learn that alignment of the technology to the business is the actual hard part, even in an extreme case like a B2B/SaaS product wherein every customer has a big custom code pile. If you have the technology well aligned with the business needs, things can go very well.

We have the technology to make the technology not suck. The real challenge is putting that developer ego into a box and digging into what drives the product's value from the customer's perspective. Yes - we know you can make the fancy javascript interaction work. But, does the customer give a single shit? Will they pay more money for this? Do we even need a web interface? Allowing developers to create cat toys to entertain themselves with is one realistic way to approach the daily cloud spend figures of Figma.

The biggest tragedy to me was learning that even an aggressive incentive model does not solve this problem. Throwing equity and gigantic salaries into the mix only seems to further complicate things. Doing software well requires at least one person who just wants to do it right regardless of specific compensation. Someone who is willing to be on all of the sales & support calls and otherwise make themselves a servant to the customer base.

hbn•33m ago
> alignment of the technology to the business is the actual hard part

Yup. The tough part of my job has always been taking the business requirements and then figuring out what the business ACTUALLY wants. Users will tell you what they want, but users are not designers and usually don't think past what they want right now. Give them exactly what they say they want and it will almost never give a good result. You have to navigate the consequences of decisions and level-set to find the solution.

LLMs are not good at this and only seem to get worse as "improved" models find users prefer constant yes-manning. I've never had an LLM tell me my idea was flawed and that's a huge issue when writing software.

konovalov-nk•4h ago
Nobody mentioned Joel Spolsky's October 2nd, 2000 article, so I'll start: https://www.joelonsoftware.com/2000/10/02/painless-functiona...

Code is not a bottleneck. Specs are. How the software is supposed to work, down to minuscule detail. Not code, not unit tests, not integration tests. Just plain English, diagrams, user stories.

The bottleneck is designing those specs and then iterating on them with end users, listening to feedback, then going back and figuring out if the spec could be improved (or even should be). Implementing actual improvements isn't hard once you have specs.

If specs are really good -- then any sufficiently good LLM/agent should be able to one-shot the solution, all the unit tests, and all the integration tests. If it's too large to one-shot -- product specs should never be a single markdown file. Think of it more like a wiki -- with links and references. And all you have to do is implement it feature by feature.

intelVISA•3h ago
> How the software is supposed to work, down to minuscule detail.

So... coding. :P

alex_hirner•4h ago
True. Therefore I'm eagerly awaiting an artificially intelligent product manager.

Or I might build that myself.

KronisLV•4h ago
> The marginal cost of adding new software is approaching zero, especially with LLMs. But what is the price of understanding, testing, and trusting that code? Higher than ever.

I’m not sure about that: the code LLMs generate isn’t categorically worse than code written by people who no longer work here and whom I can’t ask anything either. It’s also not much better or worse than what you’d find online, but it has broader reach than my Google-fu, alongside some hallucinations. At the same time, AI doesn’t hate writing tests, because it doesn’t get a choice. It doesn’t get breaks and doesn’t half-ass things any more or less depending on how close to 5 PM it is.

Maybe my starting point is viewing all code as a liability and not trusting anything anyone (myself included) has written all that much, so the point doesn’t really resonate with me. That said, I have used AI to push out codebases that work, albeit that did take a testable domain and a lot of iteration.

It produces results but also rots my brain somewhat because the actual part of writing code becomes less of a mentally stimulating activity compared to requirements engineering.

pragmatic•4h ago
Yes but no.

It’s decisions.

Ninety-five percent of it is the decisions made by every person involved.

The fastest deliveries I ever encountered were situations where the stakeholders were intimately familiar with the problem and made quick decisions. Only then did speed of coding affect delivery.

In large organizations, PMs are rarely POs. Every decision needs to be run up the flagpole and through committee with CYAs and delays at every step.

Decision makers are outsourcing this to LLMs now, which is scary, as they are supposed to be the SMEs.

It’s the same old story where the generals make decisions but the sergeants (NCOs) really run the army. That’s where I feel leads/principals/staff really make or break the product. They are the fulcrum, dealing with LLMs from above and below.

injidup•4h ago
LLMs are surprisingly good at writing test cases—something many developers either skip or struggle with. If you structure your workflow around TDD (Test-Driven Development), the LLM can generate and continuously rerun those tests as it iterates on the code. This creates a powerful closed-loop system where the spec (your unit tests) and the implementation evolve together.
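
As a minimal sketch of that loop (the function and test names are hypothetical), the spec lives in the tests and the agent regenerates the implementation until pytest goes green:

    import re

    def slugify(title: str) -> str:
        # the part the agent iterates on between test runs
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # the tests act as the spec the agent must satisfy
    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_stray_separators():
        assert slugify("  Already--Slugged!  ") == "already-slugged"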

Multimodal LLMs take it even further. I've given Claude 4 a screenshot and simply said, “There’s too much white space.” It correctly identified the issue and generated CSS fixes. That kind of feedback loop could easily become a regression test for visual/UI consistency.

This isn’t just about automating code generation—it’s about augmenting the entire development cycle, from specs to testing to visual QA.

superkuh•3h ago
Maybe it was never the bottleneck for paid software engineering at incorporated entities, but it was definitely, 100%, the bottleneck for most human people.

And now, instead of having to get the help or code from an actual programmer, as a non-programmer but technical person I can generate or alter any small trivial application I want. I'm not going to be writing an OS or doing "engineering", but if I want to write a GUI widget to display my PC's temps/etc., or alter a massive, complex C++ program to have some feature I want (like adding checkpointing to llama.cpp's fine-tune training), it's suddenly trivial and takes 15 minutes. Before, it'd take days, if it were feasible without help at all.

perlgeek•3h ago
> The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication.

I can relate :-)

Our team maintains a configuration management database for a company that has grown mostly organically from 3 to 500+ employees in ~30 years.

We don't have documented processes that would account for most of the write operations, so if we have a question, we cannot just talk to the process owner.

The next option would be to talk to the data owner, but for many of our entities, we don't have a data owner. So we look into the audit logs to see which teams often touch the data, and then we do a meeting with some senior folks from each of these teams to discuss things.

But of course, finding common meeting time slots with several senior people from several teams isn't easy, they're all busy. So that alone might delay something by a few weeks to months.

For low-stakes decisions, we often try to not go through this effort, but instead do things that are easy to roll back if they go wrong.

Once we have identified the stakeholders, have a common understanding among them, and a rough consensus on how to proceed, the actual code changes are often relatively simple in comparison.

So, I guess this falls under "overhead of coordination and communication".

netbioserror•3h ago
All of these are the exact reasons I don't go overboard with a custom Vim or Helix setup with all sorts of bells and whistles, and just use stock Sublime. The real problem is never the speed at which I can write code. I need to model a complex problem domain. My choice of language and tools virtually eliminate all boilerplate and focus the effort on those modeling and software design problems. I've tried LLMs multiple times, and each time they've proven they cannot help me.
NiloCK•3h ago
Writing code was never the only bottleneck, and I'm sure there is personal variation, but for me personally it has always been the dominant bottleneck.

My backspace and delete keys loom larger than the rest of my keyboard combined. Plodding through with meager fingers, I could always find fault faster than I could produce functionality. It was a constant, ego-depleting struggle to set aside encountered misgivings for the sake of maintaining forward progress on some feature or ticket.

Now, given defined goals and architectural vision, which I've never been short of, the activation energy for producing large chunks of 'good enough' code to move projects forward is almost zero. Even my own oversized backspace is no match for the torrent.

Again - personal variation - but I expect that I am easily 10x in both ambition and execution compared to a year ago.

cloverich•3h ago
It's interesting to think about, but LLMs are perhaps not impacting everyone the same ways, even at the same level. I'm similarly more productive, but for a different reason. I've always struggled with task persistence when the task is easy and monotonous... or something. Easy jobs, books, code that still took a while to do always took the longest; focus was impossible. Hard tasks, books, classes, etc., I always did the best at. I nearly failed school in the easiest course; highest marks in the hardest ones. I've never gotten over this.

That's now melted away. For the first time my mind feels free to think. Everything is moving almost as fast as I am thinking. I'm far less bogged down in the slow parts, which the LLM can do. I spend so much more time thinking, designing, architecting, etc. Distracting thoughts now turn into completed quality-of-life features, done on the side. I guess it rewards ADD, the real kind, in a way that the regular world understandably punishes.

And it must free up mental space for me, because I find I can now review others' PRs more quickly as well. I don't use an LLM for this, and don't have a great explanation for what is happening here.

Anyway, I'm not sure this is the same issue as yours, so it's interesting to think about what kinds of minds it's freeing, and what kinds it's of less use to.

pclowes•3h ago
I have not found LLMs most beneficial at straight writing of code, especially in complex, large-scale systems. However, I have found them extremely useful at the aspects the author claims they make more difficult: understanding, testing, and trusting code.

An LLM is an extremely useful search engine. Given access to plenty of CLI tools, asking it questions about an unfamiliar code base is extremely helpful. It can read and summarize much faster than I can. I don't trust its exact understanding, but I do trust it to give me a high-level understanding of an architecture, call out dependencies, summarize APIs, and give me an idea of which parts of the code base are new/old, etc.

Additionally, having LLMs write tests after setting up the rough testing structure and giving it a few examples massively decreases the time it takes to prove my understanding of the code through the tests, thereby increasing my confidence/trust in it.
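
For what it's worth, the "rough structure plus a few examples" I hand over looks something like this (the module under test is hypothetical); the model's job is to fill in the remaining parametrized cases:

    import pytest
    from pricing import apply_discount  # hypothetical module under test

    @pytest.mark.parametrize("price, pct, expected", [
        (100.0, 10, 90.0),   # worked example 1: plain percentage discount
        (100.0, 0, 100.0),   # worked example 2: zero discount is a no-op
        # LLM: extend with boundary cases (100%, negative, rounding)
    ])
    def test_apply_discount(price, pct, expected):
        assert apply_discount(price, pct) == expected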

tomasreimers•3h ago
I've felt like a broken record the past few weeks, but this.

Authoring has never been the bottleneck, the same way my typing speed has never been the bottleneck.

The bottleneck has been, and continues to be, code review. It was in our pitch deck 4 years ago; it's still there.

For most companies, by default, it's a process that's synchronously blocked on another human. We need to either make it async (stacking) or automate it (better, more intelligent CI), or, ideally, both.

The tools we have are outdated, and if you're a team with more than 50 eng you've already spun up a sub-team (devx, dev velocity, or dev productivity) whose job is to address this. Despite that, industry-wide, we've still done very little, because it's a philosophically poorly understood part of the process. (Why do we do code review? Like, seriously, in three bullet points, what's the purpose? Most developers realize they haven't thought that deeply here.)

https://graphite.dev

Dumblydorr•2h ago
What is the purpose of code review, in three points? I’ll take a try; let me know other thoughts!

- Functionality: does it work, and is it meeting reqs?

- Bug prevention: reliability, not breaking things

- Matching the system architecture and best practices of the codebase

Other ideas:

- Style and readability

- Learning, for the junior (and less so the senior, probably)

- Checking the “code review” box off your list

hakunin•2h ago
I don't honestly know why most people do code reviews, because it's often presented as some kind of "quick sanity check" or "plz approve". Here's why we do code reviews where I get to lead the practice:

1. Collaborate asynchronously on the architectural approach (simplify, avoid wheel reinvention)

2. Ask "why" questions, document answers in commits and/or comments to increase understanding

3. Share knowledge

4. Bonus: find issues/errors

There are other benefits, like building rapport, getting some recognition for especially great code.

To me code reviews are supposed to be a calm process that takes time, not a hurdle to quickly kick out of the way. Many disagree with me, however, and I'm not sure what the alternative is.

Edit: people tend to say reviews are for "bug finding" and "verifying requirements". I think that's at best a bonus side effect; it's too much to ask of a person merely reading the code. In my case, code reviews don't go beyond reading the code (albeit deeply, carefully). We do, however, have QA that is better suited to verifying overall functionality.

tomasreimers•35m ago
This.
kevmo314•35m ago
I've found great benefit in voluntary code reviews. Engineers are self-aware enough that if they're at all worried about a change working, they will elect for a voluntary code review. As a reviewer I also feel like my opinion is more welcomed, because I know someone chose to ask for it instead of being forced to, so I pay more attention.

This really gets at the benefits you mention and keeps people aligned with them instead of feeling like code review should be rushed.

peterldowns•2h ago
Hey Tomas, been a while! I like the approach that graphite is taking to AI code review — focus on automating the “lint” or “hey this is clearly wrong” or “you probably wanted to not introduce a security flaw here” type stuff, so that humans can focus on the more important details in a changeset. As your AI reviewers take on more tasks, have your answers to your question (“why do we do code review”) changed at all?
tomasreimers•33m ago
Certainly! A lot less proofreading and pair programming, and a lot more architecture / "hey, should we be going in this direction" / sharing tribal knowledge

Also hi Peter! Long time :)

TYPE_FASTER•2h ago
1. Define the problem you are trying to solve.

2. Propose a solution to the problem and get feedback.

3. Design the data model(s) and get feedback.

4. Design the system architecture and get feedback.

5. Design the software architecture and get feedback.

6. Write some code and get feedback.

7. Test the code.

8. Let people use the code.

Writing the code is only one step.

In all honesty, I expect over time intelligent agents will be used for the other steps.

But the code is based on the proposed solution, which is based on the problem statement/requirements. The usefulness of the code will only be as good as the solution, which will only be as good as the problem statement/requirements.

kazinator•2h ago
There are times when writing the code is a bottleneck. It's not everyday code. You don't quite know how to write the code. Whatever you try breaks somehow, and you don't readily understand how, even though it is deterministic and you have a 100% repro test case.

An example of this is making changes to a self-hosting compiler. Due to something you don't understand, something is mistranslated. That mistranslation is silent though. It causes the compiler to mistranslate itself. That mistranslated compiler mistranslates something else in a different way, unrelated to the initial mistranslation. Not just any something else is mistranslated, but some rarely occurring something else. Your change is almost right: it does the right thing with numerous examples, some of them complicated. Making your change in the 100% correct way which doesn't cause this problem is like a puzzle to work out.

LLM AI is absolutely worthless in this type of situation, because it's not something you can wing from the training data. It's not a verbal problem of token manipulation. Sure, if you already know how to code this correctly, then you can talk the LLM through it, but it could well be less effort to just do the typing yourself.

However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.

sorcercode•2h ago
_About 2 years back I pushed back hard with a similar argument, but I have since come around._

I think the premise is true that writing code was never the "main" bottleneck, but like any power tool, an LLM wielded by the right person can blow past bottlenecks.

Many of these arguments only assume the case of an inexperienced engineer blindly pumping out and merging code. I concede the problems in this case.

But put this to the test with more experienced engineers: how does it change their workflows? The results (I've personally observed) are exponentially different.

---

> LLMs reduce the time it takes to produce code, but they haven’t changed the amount of effort required to reason about behavior, identify subtle bugs, or ensure long-term maintainability.

I have to strongly disagree here; this argument doesn't apply universally. I've actually found LLMs make it easier to understand large swaths of code, faster, especially in larger codebases with legacy code that no one has worked on or dared to touch. LLMs bring an element of fearlessness, which makes it easier to effect change.

chanux•2h ago
> I've actually found LLMs to make it easier to understand large swaths of code, faster.

If you have written about the workflow behind this outcome, I'd appreciate it if you shared.

sorcercode•2h ago
Gladly. I haven't written about this aspect yet, but I'm happy to do that.

And FWIW, I'm also not alone in this observation. I can remember at least 2 times in the last month that other colleagues have cited this exact same benefit.

E.g., a complicated algo that someone wrote 3 years ago that's working well enough but has always had subtle bugs. Over a 2-day workshop, we start first by writing a bunch of (meaningful) tests with an LLM, then ask the LLM about portions of the code, piecing together why a certain bit of logic existed or was written a certain way, add more tests to confirm working behavior, then start refactoring and changing the algo (also with an LLM).

Much of this is similar to how we'd do it without LLMs, but no one had bothered to improve/change it because the time investment & ROI didn't make sense (let alone the cognitive burden of gathering context from git logs or old-timers who have nuggets of context that could be pieced together). With LLMs, a lot of that friction can be reduced.

dearilos•2h ago
Agreed. Tribal knowledge and communication are the biggest bottleneck. As soon as your team starts growing, you spend most of your time on communication, not writing code.

This is what I’m working on fixing at wispbit.com

intended•2h ago
I predict that using LLMs is going to be a firing offense.

There will be a hundred justifications for and against it, but in the end you are going to need junior devs.

If said junior dev has not done the work, and an LLM has helped them, you are going to lose your hair walking through the code - every single time.

So you will choose between doing the work yourself, hiring new devs, or making the environment you do your work in become predictable.

We can argue that LLMs are massive piracy monstrosities with huge amounts of public code in them, or that they interfere with people's ability to learn the culture of the company. The argument doesn't matter, because the reasoning being done here is motivated reasoning.

You will not care how LLMs are kept out of the playpen; you just care that they are out.

So incentives will be structured to ensure that is the case.

What will really be game, set, and match is when some massive disaster strikes because of bad code, and it can be either directly or tangentially linked to LLMs.

linsomniac•1h ago
That is true in some cases. However, there are many cases where writing the code IS the bottleneck: experiments, trying different approaches, well-defined code.

Examples:

This morning Claude Code built a browser-based app that visualizes 3.8M lines of JSON dumps of AWS infrastructure. Attention required by me: 15 minutes. Results: Reasonable for a 1-shot.

A few weeks ago I had it build me a client/server app in multiple flavors from a well defined spec: async, threaded, and select, to see which one was the most clear and easy to maintain.

A few days ago I gave it a 2K line python CLI tool and said "Build me a web interface to this CLI program". It nearly one-shotted it (probably would have if I had the playwright MCP configured).

These are all things I never would have been able to pursue without the LLM tooling in the past because I just don't have time to write the code.

There are definitely cases where the code is not the bottleneck, but those aren't the only cases.

alganet•1h ago
The problem is managing complexity.

That's the only simplification that makes sense and accounts for the different phenomena we see (solo developers doing amazing things exist, teams doing amazing things exist, amazing teachers exist, etc).

There are many ways of doing it. If you understand the problem and see a big ball of unnecessary complexity rising, you get upset.

nkotov•1h ago
We're approaching a future where creativity will be the bottleneck; everything else is going to be abstracted away.
btbuildem•1h ago
> LLMs reduce the time it takes to produce code, but they haven’t changed the amount of effort required to reason about behavior, identify subtle bugs, or ensure long-term maintainability.

I'd argue that they're slowly changing that as well -- you can ask an LLM to "read" code, summarize / review / criticize it. At the least, it can help accelerate onboarding onto new / unfamiliar codebases.

kpen11•28m ago
Whether or not there was a claim that code _was_ the bottleneck, this raises some points that I've been talking over with people for a while now.

Introducing a lever to suddenly produce more code faster creates an imbalance in the SDLC. If our review process was already a bottleneck, now that problem is even worse! If the review bottleneck was something we could tolerate/ignore before, that's no longer the case, we need to solve for it. No, that doesn't mean let some LLM review the code and ship it. CI/CD needs to get better and smarter. As a reviewer, I don't want to be on the lookout for obscure edge cases. I want to make sure my peer solved the problem in a way that makes sense for our team. CI/CD should take care of making sure the code style aligns with our policies, that new/updated tests provide enough coverage for the new/changed functionality, and that the feature actually works.
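
One hedged sketch of what that smarter gate could look like in practice, using coverage.py (the threshold and commands are assumptions, not a standard): fail the pipeline on thin coverage so reviewers can stop hunting edge cases by eye.

    # Pre-merge gate: run the suite under coverage.py and fail the
    # pipeline if coverage lands below the (assumed) team threshold.
    import subprocess, sys

    MIN_COVERAGE = 85  # assumed team policy, tune as needed

    if subprocess.run(["coverage", "run", "-m", "pytest"]).returncode != 0:
        sys.exit("tests failed")
    sys.exit(subprocess.run(
        ["coverage", "report", f"--fail-under={MIN_COVERAGE}"]
    ).returncode)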

The code expertise / shared context is another tough problem that needs solving, only highlighted by introducing a random graph of numbers generating the code. Leaning on that one engineer who has been on the team for 30 years and knows where all the deep dark secrets are was not a sustainable path even before coding agents. Having a markdown file that just says "component foo is under /foo. Run make foo to test it" was not documentation. The imbalance in the SDLC will light the fire under our collective asses to provide proper developer documentation and tooling for our codebases. I don't know what that looks like yet. Some teams are trying to have *good* markdown files that actually document where all the deep dark secrets are. These are doubly beneficial because coding agents can use them as well as your humans. But better markdown is probably a small step towards the real fix, which we won't be able to live without in the near future.

Anyway, great points brought up in the article. Coding agents aren't going away, so we need to solve this imbalance in the SDLC. Fight fire with fire!