If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?
> Bob Slydell: What you do at Initech is you take the specifications from the customer and bring them down to the software engineers?
> Tom Smykowski: Yes, yes that's right.
> Bob Porter: Well then I just have to ask why can't the customers take them directly to the software people?
> Tom Smykowski: Well, I'll tell you why, because, engineers are not good at dealing with customers.
> Bob Slydell: So you physically take the specs from the customer?
> Tom Smykowski: Well... No. My secretary does that, or they're faxed.
> Bob Porter: So then you must physically bring them to the software people?
> Tom Smykowski: Well. No. Ah sometimes.
> Bob Slydell: What would you say you do here?
The agents are the engineers now.
It's a bit like eating junk food every day. And, ah, sometimes I go see the doctor, and he keeps saying I should eat healthier and lose some weight.
Yet there is no way a product manager without any coding experience could have done it. First, the API needed to communicate with the main app correctly, which meant formatting and correcting data. That required a human engineer's guidance and experience with the expected data; the AI was lost. Second, the API was designed extremely poorly. You first had to make a request, then retry a second endpoint over and over while the Chinese API did its thing in the background. Yes, I had to poll it. I had to test many requests to make sure it was reliable (it wasn't). In the end, I recommended that we not rely on this Chinese company, and that we back out of the deal before sending them a huge deposit.
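For anyone who hasn't had the pleasure, here's a minimal sketch of that submit-then-poll dance in Python. The endpoint paths, response shape, and timings are all hypothetical, not the vendor's actual API:

```python
import time
import requests

BASE = "https://api.example-vendor.com"  # hypothetical base URL

def submit_and_poll(payload, timeout=120, interval=2):
    """Kick off an async job, then poll a second endpoint until it finishes."""
    job = requests.post(f"{BASE}/jobs", json=payload, timeout=10).json()
    job_id = job["id"]  # assumed response shape

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE}/jobs/{job_id}", timeout=10).json()
        if status.get("state") == "done":
            return status["result"]
        if status.get("state") == "failed":
            raise RuntimeError(f"job {job_id} failed: {status.get('error')}")
        time.sleep(interval)  # wait between polls

    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

Even in this tidy form, every branch (timeout, failure, poll interval) is a reliability question you have to actually test against the real service, which is exactly the work a non-coding PM couldn't have done.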
A non-technical PM couldn't have done what I did... for at least a few more years. You need a background and experience in software development to even know what to prompt the AI.
I still have a job. I'm very bullish on AI as well.
Everything just changed. Fundamentally.
If you don't adapt to these tools, you will be slower than your peers. Few businesses will tolerate that.
This is competitive cycling. Claude is a modern bike with steroids. You can stay on a penny farthing, but that's not advised.
You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
What remains to be seen is how many of us the market needs and how much the market will pay us.
I'm hoping demand and comp remain constant, but we'll see.
We need ownership in these systems ASAP.
The management has decided that the latter is preferable for short-term gains.
That's what so many of you are not getting.
Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.
I'm a 500k TC senior SWE. I write six-nines, active-active, billion-dollar-a-day systems. I'm no stranger to writing thirty-page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and don't have value. Look no further than Ubisoft and their Anno 117 game for proof. Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.
I've found something every time I've looked since starting this routine.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.
I have a small set of steps that I follow to really boost my productivity and get the speed advantage.
(Note: I am talking about AI coding, not vibe coding.)

- You give all the specs, and there is "some" chance the LLM will generate exactly the code required.
- In most cases, you will need >2 design iterations and many small iterations, like instructing the LLM to handle errors properly and recover gracefully.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't account for the edge cases our design missed.

I don't know about big tech, but here's what I have to do to solve a problem:
1. Figure out a potential solution
2. Make a hacky POC script to verify the proposed solution actually solves the problem
3. Design a decently robust system as a first iteration (that can have bugs)
4. Implement using AI
5. Verify each generated line
6. Find out edge cases and failure modes missed during design, then repeat from step 3 to tweak the design, or from step 4 to fix the bug.
WHENEVER I jump directly from 1 -> 3 (vague design) -> 5, the speed advantage evaporates.
But yeah, if anybody can do it, the salaries are going to plummet. You don't need a CS degree to tell the AI to try again.
(Color me skeptical.)
I've spent enough time working with cross-functional stakeholders to know that the vast majority of PMs (whether of the product, program, or project variety) will not be capable of running AI towards any meaningful software development goal. At best they can build impressive prototypes and demos; at worst they will corrupt data in a company-destroying level of failure.
There are few skills that are both fun and highly valued. It's disheartening if it stops being highly valued, even if you can still do it in private.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
I'm not pretending. I'm only sad.
It's at least possible that we eventually roll back to the status quo and swear to never again devalue human knowledge of the problems we solve.
Love this way of putting it. I hate that we can mostly agree that devaluing expertise of artists or musicians is bad, but that devaluing the experience of software engineers is perfectly fine, and actually preferable. Doing so will have negative downstream effects.
The cult has its origins in Taylorism - a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
I take issue even with this part.
First of all, furniture definitely can't all be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100%, and rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture, measuring by the number of chairs produced per minute is disingenuous. It ignores the amount of time spent on the design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work (probably :D).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my many monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated against the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than a year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
For one, a power tool like a bandsaw is a centaur technology. I, the human, am the top half of the centaur. The tool drives around doing what I tell it to do and helping me to do the task faster (or at all in some cases).
A GenAI tool is a reverse-centaur technology. The algorithm does almost all of the work. I’m the bottom half of the centaur helping the machine drive around and deliver the code to production faster.
So while I may choose to use hand tools in carpentry, I don’t feel bad using power tools. I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
It’s a bit different.
That has more to do with how much demand there is for what you're doing. With software eating the world and hardware constraints becoming even more visible due to the chips situation, we can expect that there will be plenty of work for SWEs who are able to drive their coding agents effectively. Being the "top" (reasoning) or the "bottom" half is a matter of choice - if you slack off and are not highly committed to delivering a quality product, you end up doing the "bottom" part and leaving the robot in the driver's seat.
The Luddites were workers who lived in an era without any social or state protections for labourers. Capitalists were using child labour to operate the looms because it was cheaper than paying anyone a fair wage. If you didn't like the conditions, you could go work as an indentured servant for the state in the workhouses.
Luddites used organized protests in the form of collective violence to force action when they had no other leverage. People were literally shot or jailed for this.
It was a horrible part of history written by the winners. That’s why everyone thinks Luddites were against technology and progress instead of social reforms and responsibility.
There were carpenters who refused to use power tools, some still do. They are probably happy -- and that's great, all the power to them. But they're statistically irrelevant, just as artisanal hand-crafted computer coding will be. There was a time when coders rejected high level languages, because the only way they felt good about their code is if they handcrafted the binary codes, and keyed them directly into the computer without an assembler. Times change.
Lots of the complaints about agents sound identical to things I've heard, and even said myself, about junior engineers.
That said, there's always going to need to be people who can reach below the abstraction and agentic coding loops deprive you of the ability to get those reps in.
Prior to GPS and navigation devices, you would print out the route ahead of time, and even then you would stop at places and ask people for directions.
Post Google Maps, you follow it, and then if you know there's a better route, you choose to take a different path and Google Maps will adjust the route accordingly.
Code isn't really like that. Hand-written code scales just like AI-written code does. While some projects are limited by how fast code can be written, it's much more often things like gathering requirements that limit progress. And software is rarely a repeated, one-and-done thing. You iterate on the existing product. That never happens with furniture.
How much is coding actually the bottleneck to successful software development?
It varies from project to project. Probably in a green field it starts out pretty high but drops quite a bit for mature projects.
(BTW, "mature" == "successful", for the most part, since unsuccessful projects tend to get dropped.)
Not that I'm an AI denier. These are great tools. But let's not just swallow the hype we're being fed.
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion.
If this doesn't signal a bubble, I don't know what does.
There are going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there are still going to be jobs resembling senior-level FAANG jobs for the foreseeable future.
Someone's going to have to do the work, babysit the algorithm, know how to verify that it actually works, know how to know that it actually does what it's supposed to do, know how to know if the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, one that has seen minimal progress in, what, 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there are a lot more engineering jobs than people think that AREN'T primarily doing this.
Psst ==> https://www.youtube.com/watch?v=k6eSKxc6oM8
MY project (MIT licensed) ...
Also they’re booked out two months in advance.
Make of that what you will.
E.g., in my team I heavily discourage pushing generated code into a few critical repositories. While hiring, one of my points was not to hire an AI enthusiast.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment in deep, long-term grokking of a project will let me surpass primarily-LLM-driven projects in raw velocity over the long term.
It also stands to reason that any task I deem to NOT further my goal of learning or deep understanding, and that can be done by an LLM, I will hand to the LLM. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.
I have never thought of that aspect! This is a solid point!
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
At least when I write by hand, I have a deep and intimate understanding of the system.
We don't stand a chance and we know it.
Drugs, alcoholism, overeating, orgies, doom scrolling, gambling.
Addictions are a problem or danger to humans, no doubt. But we don't stand a chance? I'm not sure the evidence supports your argument.
I feel more lost and unsure instead of good - because I didn't write the code, so I don't have its internal structure in my head and since I didn't write it there's nothing to be proud of.
Your control over the code is your prompt. Write more detailed prompts and the control comes back. (The best part is that you can also work with the AI to come up with better prompts, but unlike with slop-written code, the result is bite-sized and easily surveyable.)
I tried writing a small utility library using Windows Copilot, just for some experience with the tech (OK, not the highest tech, but I am 73 this year), and found it mildly impressive, but quite slow compared to what I would have done myself to get some quality out of it. It didn't make me feel good, particularly.
If they don't like it, take it away. I just won't do that part, because I have no interest in it. Some other parts of the project I do enjoy working on by hand: at least setting up the patterns I think will result in simple, readable flow, reduce potential bugs, etc. AI is not great at that. It's happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about.
Also “pull records from table X and display them in a data grid. Include a “New” button and associated functionality respecting column constraints in the database. Also add an edit and delete button for each row in the table”. God, it’s really nice to have an LLM get that 85% of the way done in maybe 2 min.
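For a sense of what that 85% scaffold can look like, here's a rough sketch in Python, assuming a SQLite table and a plain HTML grid. The real stack and table aren't specified above, so everything here is hypothetical:

```python
import sqlite3
from html import escape

def render_grid(db_path: str, table: str) -> str:
    """Pull all records from `table` and render a crude HTML data grid
    with a New button and per-row Edit/Delete buttons."""
    con = sqlite3.connect(db_path)
    # PRAGMA table_info yields (cid, name, type, notnull, default, pk) per column;
    # the notnull flags are what you'd use to enforce column constraints on the New form.
    info = con.execute(f"PRAGMA table_info({table})").fetchall()
    cols = [row[1] for row in info]
    rows = con.execute(f"SELECT * FROM {table}").fetchall()  # table name trusted here

    header = "".join(f"<th>{escape(c)}</th>" for c in cols)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(v))}</td>" for v in r)
        + "<td><button>Edit</button> <button>Delete</button></td></tr>"
        for r in rows
    )
    return f"<button>New</button>\n<table><tr>{header}<th></th></tr>{body}</table>"
```

The remaining 15% - wiring the buttons, validating against those constraints, handling the edge cases - is the part that still needs a developer's attention.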
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand, then perhaps the author is onto something. If, however, we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
It’s going to take a while.
But I guess that's nothing new.
I think the "10 lines of code" people worry their jobs are now becoming obsolete. In cases where the code just required googling how to do X with Y technology, that's true. That is going to be trivially solvable, and it will mean we don't need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well defined requirements. We're discovering correctness as we go. We 'code' to think how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code is extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc
3. We're adapting a general solution (ie a login flow) to our specific company or domain. And the latter requires us to provide careful guidance to the LLM to get the right output
It may be Claude Code writing everything around these few bits of code, but in these cases it's still important to have taste and care with the code details themselves.
We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
I'm gonna assume you think you're in the other camp, but please correct me if I'm mistaken.
I'd say I'm in the 10 lines of code camp, but I'd say that group is the least afraid of fictionalized career threat. The people that obsess over those 10 lines are the same people who show up to fix the system when prod goes down. They're the ones that change 2 lines of code to get a 35% performance boost.
It annoys me a lot when people ship broken code. Vibe coded slop is almost always broken, because of those 10 lines.
At the same time I make enough silly mistakes hand coding it feels irresponsible to NOT have a coding LLM generate code. But I look at all the code and (gasp) make manual changes :)
No one cares about a random 10 lines of code. And the focus of AI hypers on LoC is disturbing. Either the code is correct and good (allows for change later down the line) or it isn't.
> We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
You do remember how easy it is to do `git clone`?
The question to me becomes: is the PM -> engineering handoff outdated? Should they be the same person? Does this work collapse into one skill set?
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 loc that used the struct to solve the whole problem. But it would have taken me longer to build the prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
Us humans are the expensive part of the machine.
Bean counters don't care about creativity and art though, so they'll never get it.
I think, though, it is probably better for your career to churn out lines: it takes longer to radically simplify, and people don't always appreciate the effort. If instead you go the other way and increase scope and time and complexity, that is more likely to result in rewards for the greater effort.
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
Succinctly: process over product.
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of which concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to explore manually, think my way into the problem space, and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that; there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize agents and keep them busy in parallel, or when you're off-keyboard. Delegate, basically. There's quite a lot they can do already that you really don't need to do yourself, because the outcome is completely predictable. I feel that it's possible to actually increase the hours per day spent focusing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
There's talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war. But the Overton Window is defined by the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[1] https://news.ycombinator.com/item?id=44159648
[2] You can let real nations come to mind here
This was taken from the formerly famous (and controversial among the Khmer Rouge-obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling, even. They were quick to say nice words like, "ok, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs for writing code.
I think we should be worrying about more urgent things, like a worker doing the job of three people with AI agents, the mental load that comes with that, how much of the disruption caused by AI will disproportionately benefit owners rather than employees, and so on.
And others are unable to believe the visible (if not extreme) speed boost from pragmatic use of AI.
And sadly, whenever the discussion about the collective financial disadvantage of AI to software engineers starts, and wherever it goes…
the owners and employers will always make the profits.
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation - code, architecture, user experience - meets the functional and non-functional requirements. For well over a decade, my responsibilities have involved delegating the work to other developers, or even outsourcing an entire implementation to another company, like a Salesforce implementation.
Now that I have more experience and manage other SWEs, I was right, that stuff was dumb and I'm glad that nobody cares anymore.
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture would like to do that.
Yes, it really is.