• LLMs generate junk
• LLMs generate a lot of junk

But we are, even as of Opus 4.5, so wildly far away from what the author is suggesting. FWIW my experience is working in the AI/ML space at a major tech company and as a maintainer + contributor of several OSS projects.
People are blindly trusting LLMs and generating mountains of slop. And the slop compounds.
But I still write my own code. If I'm going to be responsible for it, I'm going to be the one who writes it.
It's my belief that velocity up front always comes at a cost down the line. That's been true for abstractions, for frameworks, for all kinds of time-saving tools. Sometimes that cost is felt quickly, as we've seen with vibe coding.
So I'm more interested in using AI in the research phase and to increase the breadth of what I can work on than to save time.
Over the course of a project, all approaches, even total hand-coding with no LLMs whatsoever, likely regress to the mean when it comes to hours worked. So I'd rather go with an approach that keeps me fully in control.
Why not output everything in C and ASM for 500x performance? Why use high level languages meant to be easier for humans? Why not go right to the metal?
If anyone's ever tried this, it's clear why: AI is terrible at C and ASM. But that cuts to the core of what AI is: it's not actual programming, it's mechanical reproduction.
Which means its incapabilities in C and ASM don't disappear when using it for higher-level languages. They're still there, just temporarily smoothed over due to larger datasets.
I haven't tried C or ASM yet, but it has been working very well with a C++ project I've been working on, and I'm sure it would do reasonably well with bare-bones C as well.
I'd be willing to bet it would struggle more with a lower-level language initially, but give it a solid set of guardrails with a testing/eval infrastructure and it'll get its way to what you want.
Qt in your example is a part. Your application is the whole. If you replaced Qt with WxWidgets, is your application still the same application?
But to answer your question, to replace Qt with your own piecemeal code doesn't do anything more to Qt than replacing it with WxWidgets would: nothing. The Qt code is gone. The only way it would ship-of-theseus itself into "still being Qt, despite not being the original Qt" would be if Qt required all modifications to be copyright-assigned and upstreamed. That is absurd. I don't think I've ever seen a license that did anything like that.
Even though licenses like the GPL require reciprocal FOSS release in-kind, you still retain the rights to your code. If you were ever to remove the GPL'd library dependency, then you would no longer be required to reciprocate. Of course, that would be a new version of your software and the previous versions would still be available and still be FOSS. But neither are you required to continue to offer the original version to anyone new. You are only required to provide the source to people who have received your software. And technically, you only have to do it when they ask, but that's a different story.
And now, I have a tool to do a (shuffled if I want) beat-matched mix of all the tracks in my db which match a certain tag expression. "(dnb | jungle) & vocals", wait a few minutes, and play a 2 hour beat-matched mix, finally replacing mpd's "crossfade" feature. I have a lot of joy using that tool, and it was definitely fun having it made. clmix[1] is now something I almost use daily to generate club-style mixes to listen to at home.
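A boolean tag-expression matcher like the one described is small to sketch. The following is a hypothetical illustration of the idea, not clmix's actual implementation or syntax; it supports `&` (and), `|` (or), `!` (not), and parentheses:

```python
import re

def matches(expr: str, tags: set) -> bool:
    """Evaluate a tag expression like '(dnb | jungle) & vocals' against a tag set."""
    # Rewrite each bare tag name into a membership test against `tags`.
    py = re.sub(r"[A-Za-z_][A-Za-z0-9_]*",
                lambda m: f"({m.group(0)!r} in tags)", expr)
    # Map the operators onto Python's boolean keywords.
    py = py.replace("&", " and ").replace("|", " or ").replace("!", " not ")
    # Evaluate with builtins disabled; only `tags` is visible.
    return bool(eval(py, {"__builtins__": {}}, {"tags": tags}))

print(matches("(dnb | jungle) & vocals", {"dnb", "vocals"}))  # True
print(matches("(dnb | jungle) & vocals", {"house", "vocals"}))  # False
```

A track's tags can then be tested one by one to build the playlist before beat-matching.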
Drop Python: Use Rust and Typescript
https://matthewrocklin.com/ai-zealotry/#big-idea-drop-python...
Because the conciseness and readability of the code that I use is way more important than execution speed 99% of the time.
I assume that people who use AI tools still want to be able to make manual changes. There are hardly any all or nothing paradigms in the tech world, why do you assume that AI is different?
"Our ability to zoom in and implement code is now obsolete"

Even with SOTA LLMs like Opus 4.5 this is downright untrue. Many, many logical, strategic, architectural, and low-level code mistakes are still happening. And given the context window limitations of LLMs (even with hacks like subagents to work around this), big-picture, long-term thinking about code design, structure, extensibility, etc. is very tricky to do right.
If you can't see this, I have to seriously question your competence as an engineer in the first place tbh.
"We already do this today with human-written code. I review some code very closely, and other code less-so. Sometimes I rely on a combination of tests, familiarity of a well-known author, and a quick glance at the code before saying "sure, seems fine" and pressing the green button. I might also ask 'Have you thought of X' and see what they say.
Trusting code without reading all of it isn't new, we're just now in a state where we need to review 10x more code, and so we need to get much better at establishing confidence that something works without paying human attention all the time.
We can augment our ability to write code with AI. We can augment our ability to review code with AI too."
Later he goes on to suggest that confidence is built via TDD. Problem is... if the AI is generating both code and tests, I've seen time and time again, both in internal projects and OSS projects, how major assumptions are incorrect, mistakes compound, etc.
And I asked codex to fix them for me, first attempt was to add comments to disable the rules for the whole file and just mark everything as any.
Second attempt was to disable the rules in the eslint config.
It does the same with tests: it will happily create a workaround to avoid the issue rather than fix the issue.
> If you can't see this, I have to seriously question your competence as an engineer in the first place tbh.
I can't agree more strongly. I work with a number of folks who say concerning things along the lines of what you describe above (or just slightly less strong). The trust in a system that is not fully trustworthy is really shocking, but it only seems to come from a particular kind of person. It's hard to describe, but I'd describe it as: people that are less concerned with the contents of the code versus the behaviour of the program. It's a strange dichotomy, and surprising every time.
I mean, if you don't get the economics of a reasonably factored codebase vs one that's full of hacks and architecturally terrible compromises - you're in for a VERY bad time. Perhaps even a company-ending bad time. I've seen that happen in the old days, and I expect we're in the midst of seeing a giant wave of failures due to unsustainably maintained codebases. But we probably won't be able to tell, startups have been mostly failing the entire time.
These are exactly the types of people who LOVE AI, because it produces code of similar quality and functionality to what they would produce by hand.
And that's what it feels like now. We have the "old school" developers who consider CS to be equivalent to math, and we have these other people like you mention who are happy if the code seems to work 'enough'. "Hackers" have been around for decades but in order to get anything real done, they generally had to be smart enough to understand the code themselves. Now we're seeing the rise of the unskilled hacker, thanks to AI...is this creating the next generation of script kiddies?
"The skillset you've spent decades developing and expected to continue having a career selling? The parts of it that aren't high-level product management and systems architecture are quickly becoming irrelevant, and it's your job to speed that process along" isn't an easy pill to swallow.
This is simply a mediocre take; sometimes I feel like people who hold such opinions never actually coded at all.
Please don't do this here. Thoughtful criticism is fine on this site but snark and name-calling are not.
https://news.ycombinator.com/newsguidelines.html
Edit: on closer look, you've been breaking the HN guidelines so badly and so consistently that I've banned the account. Single-purpose accounts aren't allowed here in any case.
AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
Or maybe it's analogous to the skeuomorphic phase of desktop software. Clumsy application of previous paradigm to new one; new wine in old bottles; etc.
You're what, 250 years behind at this point?
Since the dawn of the industrial revolution there has been a general trend that fewer people can make more with less. And really, even bigger than AI were fast fuel-based transportation and then global networks. Long before we started worrying about genAI, businesses had been consolidating down to a few corporations that make enough to supply the world from singular large factories.
We fought the war against companies. Companies won.
Now you're just at the point where the fabric makers were, where the man with the pick axe was, where the telephone switch operator was, where the punch card operator was.
Maybe don't speak for all of us.
Or you do, but you believe it's worth it because your software helped more patients, or improved the overall efficiency and therefore created more demand and jobs - a belief many pro-AI people hold as well.
Patient outcomes are significantly better with modern technology.
> You just don't care about them.
Yeah, okay.
Experienced engineers can successfully vibe code? By definition it means not reading the output.
If you’re not reading your output, then why does skill level even matter?
Do we want everyone to operate at PM level? The space for that is limited. It's easy to say you enjoy vibe coding when you are high up the chain, but most of us devs are not experienced or lucky enough to feel stable when workflows change every day.
But I don't feel I have enough data to believe whether vibe coding or hand coding is better. I am personally doing tedious tasks with AI, and still writing code by hand all the time.
Also, the author presents rewriting NumPy in Rust as some achievement, but the AI was most probably trained on NumPy and RustyNum. AIs are best at copying code, so it's not really a big thing.
AI can take you down a rabbit hole that makes you feel like you are being productive but the generated code can be a dead end because of how you framed the problem to the AI.
Engineers need enough discipline to understand the problems they are trying to solve before delegating a solution to a stochastic text generator.
I don’t always like using AI but have found it helpful in specific use cases such as speeding up CI test pipelines and writing specs; however, someone smarter than me or more familiar with the problem space may have better strategies that I cannot think of, and I have been fooled by randomness.
With AI you have to be careful to know what is important; you don't want to waste your time doing random stuff that may not even get a single user. If it's for fun, that's fine, but if you want to build a business or improve your output, I would advise people to choose well.
--dangerously-skip-permissions
If run in a devcontainer[1][2], the worst thing that can happen is it deletes everything in the filesystem below the mounted repo. Recovery would entail checking out the repo again.

1. (conventional usage) https://code.visualstudio.com/docs/devcontainers/containers
2. (actual spec) https://containers.dev/
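For reference, a minimal devcontainer.json along these lines might look like the sketch below. The base image and install command are assumptions on my part, not something from the comment:

```json
{
  "name": "claude-sandbox",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  "postCreateCommand": "npm install -g @anthropic-ai/claude-code",
  "remoteUser": "node"
}
```

The agent then runs inside the container, so a destructive command can only touch the mounted workspace.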
Source please. If it's contained (as in Claude runs INSIDE the container, not outside while having access to it) I don't understand how it technically could blue pill out of it. If it were to be able to leave the container then the container code would be updating accordingly to patch whatever exploit was found somehow. So I don't believe this but maybe I'm wrong, hence why I'm asking for a reference.
What I always find amusing are the false equivalences, where one or several (creative) processes involving the hard work that is a fundamental part of the craft get substituted by push-to-"I did this!1!!" slop.
How's the saying go? "I hate doing thing x. The only thing I hate more is not doing thing x". One either owns that, or one doesn't. So that is indeed not mysterious. Especially not in a system where "Fake it till you make it" has been and is advertised as a virtue.
Can someone tell me what the current thinking is on how we'll get over that gap?
To use LLMs effectively, you have to be an excellent problem-solver with complex technical problems. And developing those skills has always been the goal of CS education.
Or, more bluntly, are you going to hire the junior with excellent LLM skills, or are you going to hire the junior with excellent LLM skills and excellent technical problem-solving skills?
But they do have to be able to use these tools in the modern workplace so we do cover some of that kind of usage. Believe me, though, they are pretty damned good at it without our help. The catch is when students use it in a cheating way and don't develop those problem-solving skills and then are screwed when it comes time to get hired.
So our current thinking is there's no real shortcut other than busting your ass like always. The best thing LLMs offer here is the ability to act as a tutor, which does really increase the speed of learning.
You spend the proverbial 10k hours like before. I don't know why AI has to lead to a lack of learning. I don't find that people have stopped learning digital painting, even though digital painting, from my perspective, is even more "solved" by machines than programming is.
I heard that Pixar had a very advanced facial expression simulation system a decade ago. But I am very willing to bet that when Pixar hires animators they still prefer someone who can animate by hand (either in Maya or frame-by-frame on paper).
The code interleaves rules and control flow, drops side effects like “exit” in functions and hinges on a stack of regex for parsing bash.
This isn’t something I’ve attempted before but it looks like a library like bashlex would give you a much cleaner and safer starting point.
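The difference a real tokenizer makes is easy to show; even Python's stdlib shlex (a simpler cousin of the bashlex suggestion above, used here only as a sketch) handles the quoting cases that ad-hoc regexes routinely get wrong:

```python
import shlex

# A quoted argument that naive whitespace or regex splitting would mangle.
cmd = 'grep -r "hello world" /tmp/logs'

# shlex applies shell-style quoting rules, keeping "hello world" as one token.
tokens = shlex.split(cmd)
print(tokens)  # ['grep', '-r', 'hello world', '/tmp/logs']
```

A full bash parser like bashlex goes further (pipelines, redirections, control flow), which is why it's a cleaner starting point than a stack of regexes.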
For a “throwaway” script like this maybe it’s fine, but this is typical of the sort of thing I’m seeing spurted out and I’m fascinated to see what people’s codebases look like these days.
Don’t get me wrong, I use CC every day, but man, you do need to fight it to get something clean and terse.
https://gist.github.com/mrocklin/30099bcc5d02a6e7df373b4c259...
"I feel both that I can move faster and operate in areas that were previously inaccessible to me (like frontend). Experienced developers should all be doing this. We're good enough to avoid AI Slop, and there's so much we can accomplish today."
If frontend was "inaccessible" and AI makes it "accessible", I would argue that you don't really know frontend and should probably not be doing it professionally with AI. Use AI, yes, but learn frontend without AI first. And his "Experienced developers should all be doing this" is ridiculous. He should be honest and confess that he doesn't like programming. He probably enjoys systems design or some sort of role involving product design that does not involve programming. But none of these people are "developers".
>> [...]
>>No, you’re not too good to vibe code. In fact, you’re the only person who should be vibe coding.
All we have to do is produce more devs with 20 years of experience and we'll be set. :)
https://www.stochasticlifestyle.com/a-guide-to-gen-ai-llm-vi...
If you know what the fuck you're doing, they're incredible. Scarily so.
... there are some serious costs and reasonable reservations to AI development. Let's start by listing those concerns

These are super-valid concerns. They're also concerns that I suspect came around when we developed compilers and people stopped writing assembly by hand, instead trusting programs like gcc ...
Compilers are deterministic, making their generated assembly code verifiable (for those compilers which produce assembly code). "AI", such as the "Claude Code (or Cursor)" referenced in the article, is nondeterministic in its output and therefore incomparable to a compiler. One might as well equate the predictability of a Fibonacci sequence[0] to that of a PRNG[1] because both involve numbers.
0 - https://en.wikipedia.org/wiki/Fibonacci_sequence
1 - https://en.wikipedia.org/wiki/Pseudorandom_number_generator
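The contrast can be made concrete: a fixed recurrence gives the same answer on every run, while a PRNG stream is only repeatable if you pin the seed. A small illustration (mine, not from the comment):

```python
import random

def fib(n):
    # Iterative Fibonacci: fully deterministic, same input -> same output.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55, on every machine, every run

# A PRNG only agrees with itself under a fixed seed; two generators seeded
# identically produce the same stream, but an unseeded one will not.
assert [random.Random(42).randint(0, 9) for _ in range(5)] == \
       [random.Random(42).randint(0, 9) for _ in range(5)]
```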
"Stop": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "afplay -v 0.40 /System/Library/Sounds/Morse.aiff"
      }
    ]
  }
],
"Notification": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "afplay -v 0.35 /System/Library/Sounds/Ping.aiff"
      }
    ]
  }
]
These are nice, but it's even nicer when Claude talks when it needs your attention. Easy to implement: have it talk to ElevenLabs or OpenAI, and it's a pretty delightful experience.
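A zero-dependency macOS version of that idea keeps the same hook shape as the fragment above but swaps afplay for the built-in say command (the voice and message here are my own placeholders, not from the comment):

```json
"Notification": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "say -v Samantha 'Claude needs your input'"
      }
    ]
  }
]
```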
Neither this nor the discussion here so far mentions ethics. It should.
According to the latest reports, AI now consumes more water than the global bottled water industry. These datacenters strain our grids, and where needs can't be met they employ some of the least efficient ways to generate electricity, producing tons of pollution. The pollution and the water problems hit poorer communities hardest, as the more affluent ones can afford much better legal pushback.
Next, alas, we can't avoid politics. The shadow that Peter Thiel and a16z (who named one of the two authors of the Fascist Manifesto their patron saint) cast over these tools is very long. These LLMs are used as a grand excuse to fire a lot of people and also to manufacture fascist propaganda on a scale you have never seen before. Whether these were goals when Thiel & gang financed them or not, it is undeniable that they are now indispensable in helping the rise of fascism in the United States. Even if you were to say "but I am using code-only LLMs," you are still stuffing the pockets of these oligarchs.
The harm these systems cause is vast and varied. We have seen them furthering suicidal ideation in children and instructing them on executing these thoughts. We have seen them generating non-consensual deepfakes at scale including those of children.
deergomoo•10h ago
But saying that AI development is more fun because you don’t have to “wrestle the computer” is, to me, the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.
lawlessone•10h ago
In b4 someone mentions some famous artists had apprentices under them.
I might start watching golf, and every time someone else gets the ball in the hole I'll take credit for it. "Did you see what I did there?"
quotemstr•10h ago
What if I have a block of marble and a vision for the statue struggling from inside it and I use an industrial CNC lathe to do my marble carving for me. Have I sculpted something? Am I an artist?
What if I'm an architect? Brunelleschi didn't personally lay all the bricks for his famous dome in Florence --- is it not architecture? Is it not art?
deergomoo•10h ago
I would also call designing a system to be fed into an LLM designing. But I wouldn’t call it programming.
If people are more into the design and system architecture side of development, I of course have no problem with that.
What I do find baffling, as per my original comment, is all the people saying basically “programming is way more fun now I don’t have to do it”. Did you even actually like programming to begin with then?
somebehemoth•10h ago
Of course not everyone who programs AI-style hates programming, but I do think your take explains a large chunk of the zealotry: it has become Us vs. Them for both sides, and each is staking out their territory. Telling the vibe coder they are not programming hurts their feelings, much like telling a senior developer that all their accumulated experience and knowledge is useless, if not today then surely some day soon!
quotemstr•8h ago
I think it's legitimate that someone might enjoy the act of creation, broadly construed, but not the brick-by-brick mechanics of programming.
CharlesW•10h ago
Some people find software architecture and systems thinking more fun than coding. Some people find conducting more fun than playing an instrument. It's not too mysterious.
thewebguyd•7h ago
I don't mind ops code though. I dislike building software as in products, or user-facing apps but I don't mind glue code and scripting/automation.
Don't ask me to do leetcode though, I'll fail and hate the experience the entire time.
ajcp•10h ago
I like this. I'm going to see if my boss will go for me changing my title from Solutions Architect to Solutions Commissioner. I'll insist people refer to me as "Commissioner ajcp"
tsukikage•10h ago
Indeed, of all the possible things to say!
AI "development" /is/ wrestling the computer. It is the opposite of the old-fashioned kind of development where the computer does exactly what you told it to. To get an AI to actually do what I want and nothing else is an incredibly painful, repetitive, confrontational process.
9dev•9h ago
You very likely have some of these toil problems in your own corner of software engineering, and it can absolutely be liberating to stop having to think about the ape and the jungle when all you care about is the banana.
tsukikage•9h ago
Using English, with all its inherent ambiguity, to attempt to communicate with an alien (charitably) mind very much does /not/ make this task any easier if the thing you need to accomplish is of any complexity at all.
bitwize•3h ago
Sanchez's Law of Abstraction applies. You haven't abstracted anything away, just added more shit to the pile.
bossyTeacher•7h ago
No, it is not. What you are doing is not too different from asking a remote dev hired from [insert freelance platform here] to make an app, then entering a cycle of testing the generated app and giving feedback. That is not wrestling the computer.
zephen•2h ago
For people like me, anything that makes the computer more human-like is a step in the wrong direction, and feels much more like wrestling.
sodapopcan•10h ago
I don't care if you use AI but leave me alone. I'm plenty fast without it and enjoy the process this author callously calls "wrestling with computers."
Of course this isn't going to help with the whole "making me fast at things I don't know" but that's another can of worms.
sodapopcan•9h ago
At the same time, one of the best developers I worked with was a two-finger typist who had to look at the keyboard. But again, I don't care if you're going to use AI (well, that's not entirely true, but I'm not going to get into it); it's the tone of this article, that "you should learn it," I take issue with.
harles•10h ago
I think it’s a bit like a gambling addiction. I’m riding high the few times it pays off, but most of the time it feels like it’s just on the edge of paying off (working) and surely the next prompt will push it over the edge.
gamerdonkey•10h ago
https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gam...
harles•10h ago
I feel this exactly. I’ve been one of the biggest champions of the tech in my org in spite of the frequent pain I feel from it.
rileymichael•10h ago
just.. uninstall it? i've removed all ai tooling from both personal+work devices and highly recommend it. there's no temptation to 'quickly pull up $app just to see' if it doesn't exist
aspenmartin•10h ago
The OP is right and I feel this a lot: Claude pulls me into a rabbit hole, convinces me it knows where to go, and then just constantly falls flat on its face, and we waste several hours together, with a lot of all-caps prompts from me towards the end. These sessions drag on in exactly the way he mentions: "maybe it's just a prompt away from working."
But I would never delete CC, because there are plenty of other instances where it works excellently and accelerates things quite a lot. I know we see a lot of "coding agents are getting worse!" and "the METR study proves all you AI sycophants are deluding yourselves!", and I understand where these come from and agree with some of the points they raise. But honestly, my own personal perception is that coding agents are getting much better (I'd argue this is pretty well backed up by benchmarks, and by Claude's own product data which we don't see; I doubt they would roll out a launch without at least one A/B test), and since coding is a verifiable domain, these "we're running out of data!" problems just aren't relevant here. The same way AlphaGo got superhuman, so will these coding agents; it's just a matter of when, and I use them today because they are already useful to me.
harles•10h ago
It does _feel_ like the value and happiness will come some versions down the road when I can actually focus on orchestration, and not just bang my head on the table. That’s the main thing that keeps me from just removing it all in personal projects.
vorticalbox•8h ago
I do this a lot and it’s super helpful.
falloutx•10h ago
I am also now experimenting with my own version of opencode and I change models a lot, and it helps me learn how each model fails at different tasks, and it also helps me figure out the most cost effective model for each task. I may have spent too much time on this.
Yoric•10h ago
In both cases, it works because I can mostly detect when the output is bullshit. I'm just a little bit scared, though, that it will stop working if I rely too much on it, because I might lose the brain muscles I need to detect said bullshit.
ToucanLoucan•10h ago
I love this job but I can absolutely get people saying that AI helps them not "fight" the computer.
vlod•3h ago
Once you've done it, you'll hopefully never have to do it again (or at worst the next time will be a derivative). Over time you'll build a collection of "how to do stuff".
I think this is the path to growth. Letting an LLM do it for you is equivalent to having it solve a hard leetcode problem for you: you're not really taxing your brain.
thewebguyd•7h ago
And for me (and other ops folks here I'd presume), that is the fun part. Sad, from my career perspective, that it's getting farmed out to AI, but I am glad it helps you with your side projects.
libraryofbabel•10h ago
I am happy to accept that some people still prefer to write out their code by hand… that’s ok? Keep doing it if you want! But I would gently suggest you ask yourself why you are so offended by people that would prefer to automate much of that, because you seem to be offended. Or am I misreading?
And hey, I still enjoy solving interesting problems with code. I did Advent of Code this year with no LLM assistance and it was great fun. But most professional software development doesn't have that novelty value where you get to think about algorithms and combinatorial puzzles and graphs and so on.
Before anyone says it, sure, there is a discussion to be had about AI code quality and the negative effects of all this. A bad engineer can use it to ship slop to production. Nobody is denying that. But I think that’s a separate set of questions.
Finally, I'm not sure painting is the best analogy. Most of us are not creating works of high art here. It's a job, to make things for people to use, more akin to building houses than painting the Sistine Chapel. Please don't sneer at us if we enjoy finding ways to put up our drywall quicker.
aspenmartin•10h ago
It's like the article's point: we don't write assembly anymore, no one considers gcc to be controversial, and no one today says "if you think gcc is fun I will never understand you; real programming is assembly, that's the fun part."
You are doing different things and exercising different skillsets when you use agents. People enjoy different aspects of programming, of building. My job is easier, I'm not sad about that I am very grateful.
Do you resent folks like us that do find it fun? Do you consider us "lesser" because we use coding agents? ("the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.") <- I don't really care if you consider this "true" painting or not, I wanted a painting and now I have a painting. Call me whatever you want!
lunar_mycroft•9h ago
The compiler reliably and deterministically produces code that does exactly what you specified in the source code. In most cases, the code it produces is also as fast as or faster than hand-written assembly. The same can't be said for LLMs, for the simple reason that English (and other natural languages) is not a programming language. You can't compile English (and shouldn't want to, as Dijkstra correctly pointed out) because it's ambiguous. All you can do is "commission" another mind.
> Do you resent folks like us that do find it fun?
For enjoying it on your own time? No. But for hyping up the technology well beyond its actual merits, antagonizing people who point out its shortcomings, and subjecting the rest of us to worse code? Yeah, I hold that against the LLM fans.
aspenmartin•8h ago
> But for hyping up the technology well beyond its actual merits, antagonizing people who point out its shortcomings, and subjecting the rest of us to worse code? Yeah, I hold that against the LLM fans.
Is that what I'm doing? I understand your frustration. But I hope you understand that this is a straw man: I could straw-man the antagonists and AI-hostile folks too, but the point is that the factions and tribes are complex and unreasonable opinions abound. My stance is that people dismiss coding agents at their peril, but it's not really a problem: taking the gcc analogy, in the early compiler days there was a period where compilers were weak enough that writing assembly by hand was reasonable. Now it would be highly inefficient and underperformant to do that. But all the folks who lamented compilers didn't crumble away; they eventually adapted. I see that analogy as applicable here. It may be hard to see how wild coding agents are because we're not time travelers from 2020, or even from 2022 or 2023: this used to be an absurd idea and is now very serious and highly adopted. But still quite weak! We're still missing key reliability, functionality, and capabilities. Yet if we got this far this fast, and if you realize that coding-agent training, being in a verifiable domain, is not limited in the same way that e.g. vanilla LLM training is, we seem to be careening forward. And by nature of their current weakness, absolutely it is reasonable not to use them, and absolutely it is reasonable to point out all of their flaws.
Lots of unreasonable people out there, my argument is simply: be reasonable.
bossyTeacher•6h ago
Novelty isn't necessarily better as a replacement of what exists. Example: blockchain as fancy database, NFTs, Internet Explorer, Silverlight, etc.
AndrewKemendo•10h ago
I have found in my software-writing experience that the majority of what I want to write is boilerplate with small modifications, but most of the problems are insanely hard-to-diagnose edge cases, and I have absolutely no desire, nor is it a good use of time in my opinion, to deal with structural issues in things that I do not control.
The vast majority of code you do not control, because you aren't the owner of the framework, the library, your language, or whatever, and so the vast majority of software engineering is coming up with solutions to foundational problems in the tools you're using.
The idea that this is the only true type of software engineering is absurd
True software engineering is systems, control and integration engineering.
What I find absolutely annoying is that there's this rejection of the highest, Hofstadter level of software architecture and engineering.
This is basically sneered at over the idea of “I’m gonna go and try to figure out some memory management module because AMD didn’t invest in additional SOC for the problems that I have because they’re optimized for some Business goals.”
It’s frankly junior level thinking
lacy_tinpot•10h ago
You're never really wrestling the computer. You're typically wrestling with the design choices and technical debt of decisions that were, in hindsight, bad ones. And it's always in hindsight; at the time, those decisions always seemed smart.
Like with the rise of frameworks and abstractions, who is actually doing anything with actual computation?
Most of the time it's wasting time learning some bs framework or implementing some other poorly designed system that some engineer who no longer works at the company created. In fact, the entire industry is basically just one poorly designed system, with technical debt that grows more burdensome year by year.
It's very rarely about actual programming or actual computation or even "engineering". But usually just one giant kludge pile.
raw_anon_1111•10h ago
Development is solely to exchange labor for money.
I haven’t written a single line of code “for fun” since 1992. I wrote code for my degree between 1992 and 1996 while having fun in college, and after that I spent my free time, depending on my stage in life, dating, hanging out with friends, teaching fitness classes and doing monthly charity races with friends, spending time with my wife and (step)kids, and now enjoying traveling with my wife and friends, and still exercising.
forgetfulness•10h ago
Well, I'll have to take their word for it that they're passionate about maximizing shareholder value by improving key performance indicators, I know I personally didn't sign up for being in meetings all day to leverage cross functional synergies with the goal of increasing user retention in sales funnels, or something along those lines.
I'm not passionate about either that or mandatory HR training videos.
xnx•10h ago
Creating software has a similar number of steps. AI tools now make some of them much (much) easier/optional.
BeetleB•8h ago
At home, I never had the time or will to be as thorough. Too many other things to do in life. Pre-LLMs, most of my personal scripts were just messy.
One of the nice things with LLM assisted coding is that it almost always:
1. Gives my program a nice interface/UI
2. Puts in good print/log statements
3. Writes tests (though this is hit or miss)
Most of the time it does it without being asked.
And it turns out, these are motivation multipliers. When developing something, if it gives me good logs, and has a good UI, I'm more likely to spend time developing it further. Hence, coding is now more joyful.
dang•5h ago
You've got a good analogy there though, because many great and/or famous painters have used teams of apprentices to produce the work that bears their (the famous artist's) name.
I'm reminded also of chefs and sous-chefs, and of Harlan Mills's famous "chief surgeon plus assistants" model of software development (https://en.wikipedia.org/wiki/Chief_programmer_team). The difference in our present moment, of course, is that the "assistants" are mechanical ones.
(As for how fun this is or isn't: personally I can't tell yet. I don't enjoy the writing part as much; I'd rather write code than write prompts. But then, I don't enjoy writing grunt code / boilerplate etc., and there's less of that now. And I don't enjoy having to learn tedious details of some tech I'm not actually interested in in order to get an auxiliary feature that I want, and there's orders of magnitude less of that now. And then there are the projects and programs that simply would never exist at all if not for this new mechanical help in the earliest stages, and that's fun. It's a lot of variables to add up and it's all in flux. Like the French Revolution, it's too soon to tell! - https://quoteinvestigator.com/2025/04/02/early-tell/)
vercaemert•5h ago
i like what software can do, i don't like writing it
i can try to give the benefit of the doubt to people saying they don't see improvements (and assume there's just a communication breakdown)
i've personally built three poc tools that proved my ideas didn't work, then tossed the poc tools. i've had those ideas since i knew how to program; i just didn't have the time and energy to see them through.
williamcotton•5h ago
The “lone genius” image is largely a modern romantic invention.
mattwilsonn888•4h ago
Programming a system at a low-level from scratch is fun. Getting CSS to look right under a bunch of edge cases - I won't judge that programmer too harshly for consulting the text machine.
This is especially true considering it's these shallow but trivia-dominated tasks which are the least fun and also which LLMs are the most effective at accomplishing.