I see this a lot here
Copyright law, education, and the sheer scale of what LLMs are changing are a few reasons, off the top of my head, why "power tools vs carpentry" is a bad analogy.
I beg to differ. There are a whole lot of folks with astonishingly incomplete understanding of the facts here who are going to continue to make things very, very complicated. Disagreement is meaningless when the relevant parties aren't working from the same baseline of basic knowledge.
"Anti-LLM sentiment" within software development is nearly non-existent. The biggest kind of push-back to LLMs that we see on HN and elsewhere, is effectively just pragmatic skepticism around the effectiveness/utility/ROI of LLMs when employed for specific use-cases. Which isn't "anti-LLM sentiment" any more than skepticism around the ability of junior programmers to complete complex projects is "anti-junior-programmer sentiment."
The difference between the perspectives you find in the creative professions vs in software dev doesn't come down to "not getting" or "not understanding"; it really is a question of relative exposure to pro-LLM vs anti-LLM ideas. Software dev and the creative professions are acting as entirely separate filter-bubbles of conversation here. You can end up entirely on the outside of one or the other of them by accident, and so end up entirely without exposure to one or the other set of ideas/beliefs/memes.
(If you're curious, my own SO actually has this filter-bubble effect from the opposite end, so I can describe what that looks like. She only hears the negative sentiment coming from the creatives she follows, while also having to dodge endless AI slop flooding all the marketplaces and recommendation feeds she previously used to discover new media to consume. And her job is one you do with your hands and specialized domain knowledge; so none of her coworkers use AI for literally anything. [Industry magazines in her field say "AI is revolutionizing her industry" — but they mean ML, not generative AI.] She has no questions that ChatGPT could answer for her. She doesn't have any friends who are productively co-working with AI. She is 100% out-of-touch with pro-LLM sentiment.)
Strong disagree right there. I remember talking to a (developer) coworker a few months ago who seemed like the biggest AI proponent on our team. When we were one-on-one at lunch, though, he revealed that he really doesn't like AI that much at all; he's just afraid to speak up against it. I'm in a few Discord channels with a lot of highly skilled programmers (senior and principal) who mostly work in game development (or adjacent), and most of them either mock LLMs or hold them in outright derision. Hacker News is kind of a weird pro-AI bubble; most other places are not nearly as keen on this stuff.
This is certainly untrue. I want to say "obviously", so clearly there's some kind of disconnect. Below are some examples of negative sentiments programmers have - can you explain why you are not counting these?
NOTE: I am not presenting these as an "LLMs are bad" argument; I'm only listing examples of what drives existing anti-LLM sentiment in programmers.
- Job loss or loss of income
Both of these are exacerbated by the pace of change, since so many people have already spent their lives and money establishing themselves in the career and can't realistically pivot without becoming miserable. That's the same story for every large, fast change - though arguably this one is very large and very fast even by those standards. Lots of tech leadership is focusing even more than they already were on cheap contractors, and/or pushing employees for unrealistic productivity increases. I.e. it's exacerbating the "fast > good" problem, and a lot of leadership is also overestimating how far it reduces the barrier to creating things, as opposed to mostly just speeding up a person's existing capabilities.
- Happiness loss
This is regarding people who enjoy writing/designing programs but don't enjoy directing LLMs; or who don't enjoy debugging the types of mistakes LLMs tend to make, as opposed to the types of mistakes that human devs tend to make. For these people, it's like their job was forcibly changed to a different, almost unrelated job, which can be miserable depending on why you were good at - or why you enjoyed - the old job.
- Uncertainty/skepticism
I'm pushing back on your dismissal of this one as "not anti-LLM sentiment" - the comparison doesn't make sense. If I were forced to only review junior dev code instead of ever writing my own code or reviewing experienced dev code, I would be unhappy. And I love teaching juniors! And even if we ignore the subset of cases where it doesn't do a good job, or assume it will soon be senior-level for every use case, this still overlaps with the above problem: the mistakes it makes are not like the mistakes a human makes. For some people, it's more unnatural/stressful to keep your eyes peeled for the kinds of mistakes it makes, creating a feeling of lack of control.
- Expertise loss
A lot of positive outcomes with LLMs come from being already experienced. Some argue this will be eroded - both for new devs and existing experienced devs.
How I program with agents - https://news.ycombinator.com/item?id=44221655 - June 2025 (295 comments)
If that is true, why should one invest in learning now rather than waiting for 8 months to learn whatever is the frontier model then?
But if you do want to use LLMs for coding now, not using the best models just doesn't make sense.
I think you (and others) might be misunderstanding his statement a bit. He's not saying that using an old model is harmful in the sense that it outputs bad code -- he's saying it's harmful because some of the lessons you learn will be out of date and not apply to the latest models.
So yes, if you use current frontier models, you'll need to recalibrate and unlearn a few things when the next generation comes out. But in the meantime, you will have gotten 8 months (or however long it takes) of value out of the current generation.
Using agents that interact with APIs means people can own their user experience more. Why not craft a frontend that behaves exactly the way YOU want it to, tailor-made for YOUR work, abstracting the set of products you are using and focusing only on the actually relevant bits of the work you are doing? A downside might be more explicit metering of use in these products instead of the per-user licensing that is common today. But the upside is there is so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.
But if you try some penny-saving cheap model like Sonnet [..bad things..]. [Better] pay through the nose for Opus.
After blowing $800 of my bootstrap startup funds on Cursor with Opus for myself in a very productive January, I figured I had to try to change things up... so this month I'm jumping between Claude Code and Cursor, sometimes writing the plans and having the conversation in Cursor and dumping the implementation plan into Claude. Opus in Cursor is just so much more responsive and easy to talk to, compared to Opus in Claude.
Cursor has this "Auto" mode which feels like it has very liberal limits (amortized cost, I guess) that I'm also trying to use more, but I don't really like flipping a coin, and then, if it comes up heads, wasting half an hour discovering the LLM made a mess and trying again while forcing the model.
Perhaps in March I'll bite the bullet and take this author's advice.
You can enjoy it while it lasts; OpenAI is being very liberal with their limits because of CC eating their lunch rn.
I was spending unholy amounts of money and tokens (subsidized cloud credits, tho) forcing Opus for everything, but I'm very happy with this new setup. I've also experimented with OpenCode and their Zen subscription to test Kimi K2.5 and similar models, and they also seem like a very good alternative for some tasks.
What I cannot stand, tho, is using Sonnet directly (it's fine as a subagent); I've found it hard to control, and it doesn't follow detailed instructions.
We have built two of them now, and clearly the state of the art here can be improved. But it is hard to push too much on this while the models keep improving.
┌────────────────────────────┐
│ User │
└──────────────┬─────────────┘
│
▼
┌────────────────────────────┐
│ Agent Harness │
│ (software interface) │
└──────┬──────────────┬──────┘
│ │
▼ ▼
┌────────────┐ ┌────────────┐
│ Models │ │ Tools │
└────────────┘ └────────────┘
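To make the diagram concrete, here is a minimal sketch of the loop a harness like this runs. Everything in it is hypothetical - the Message/ModelReply shapes, callModel, and the tool names stand in for whatever model client and tools you actually wire up; this is the general pattern, not any vendor's SDK.

// Minimal agent-harness loop (hypothetical sketch, not a real product's API).
// The harness sits between the user, a model, and a set of tools: it sends
// the transcript to the model, runs any tool the model asks for, feeds the
// result back, and repeats until the model answers without a tool call.

type Message = { role: "user" | "assistant" | "tool"; content: string };
type ModelReply = { text: string; toolCall?: { name: string; args: string } };

// Whatever model client you actually use goes behind this signature.
type CallModel = (history: Message[]) => Promise<ModelReply>;

// Tools the harness exposes to the model (names and bodies are illustrative stubs).
const tools: Record<string, (args: string) => Promise<string>> = {
  read_file: async (path) => `contents of ${path} (stub)`,
  run_tests: async () => "all tests passed (stub)",
};

async function runAgent(userPrompt: string, callModel: CallModel): Promise<string> {
  const history: Message[] = [{ role: "user", content: userPrompt }];
  for (let step = 0; step < 20; step++) {            // hard cap so the loop always ends
    const reply = await callModel(history);
    history.push({ role: "assistant", content: reply.text });
    if (!reply.toolCall) return reply.text;          // no tool requested: we're done
    const tool = tools[reply.toolCall.name];
    const result = tool
      ? await tool(reply.toolCall.args)
      : `unknown tool: ${reply.toolCall.name}`;
    history.push({ role: "tool", content: result }); // loop back with the tool output
  }
  return "stopped: step limit reached";
}

The interesting design decisions all live around this loop: what goes into the history, which tools get exposed, and when to stop.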
Here's an example of a harness with less code: https://github.com/badlogic/pi-mono/blob/fdcd9ab783104285764...

The jury's still out on that one, because climate change is an existential risk.
But secondly, there's an entire field of LLM-assisted coding that's being almost entirely neglected, and that's code-autocomplete models. Fundamentally they're the same technology as agents and should be doing the same things: indexing your code in the background, filtering the context, etc. But they get much less attention, and it does feel like those models are stagnating.
I find that very unfortunate. Compare the two workflows:
With a normal coding agent, you write your prompt, then you have to wait at least a full minute for the result (generally more, depending on the task), breaking your flow and forcing you to task-switch. Then it gives you a giant mass of code, and of course 99% of the time you just approve and test it because it's a slog to read through what it did. If it doesn't work as intended, you get angry at the model and retry your prompt, spending more tokens the longer your chat history grows.
But with LLM-powered autocomplete, when you want, say, a function to do X, you write your comment describing it first, just like you should if you were writing it yourself. You instantly see a small section of code, and if it's not what you want, you can alter your comment. Even if it's not 100% correct, multi-line autocomplete is great because you approve it line by line and can stop when it gets to the incorrect parts. You're not forced to task-switch, and you don't lose your concentration - that great sense of "flow".
Fundamentally it's not that different from agentic coding - except instead of prompting in a chatbox, you write comments in the files directly. But I much prefer the quick feedback loop, the ability to ignore outputs you don't want, and the fact that I don't feel like I'm losing track of what my code is doing.
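As a concrete (made-up) illustration of that comment-first flow: you type the comment and perhaps the signature, and the completion model proposes the body, which you accept line by line.

// You write the comment; the model proposes the rest, one accepted line at a time.

// Return the median of a non-empty array of numbers, without mutating the input.
function median(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);  // copy, then sort numerically
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]                                // odd length: middle element
    : (sorted[mid - 1] + sorted[mid]) / 2;       // even length: average the two middle ones
}

If the first proposed line is wrong, you stop accepting, tweak the comment, and try again - no chat window, no task switch.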
But if AI keeps getting better at code, it will produce entire in-silico simulation workflows to test new drugs or even to design synthetic life (which, again, could make us all die, or worse). Yet there is a tiny, tiny chance we will use it to fix some of the darkest aspects of human existence. I will take that.
That's why. I was using Claude the other day to greenfield a side project and it wanted to do some important logic on the frontend that would have allowed unauthenticated users to write into my database.
It was easy to spot for me, because I've been writing software for years, and it only took a single prompt to fix. But a vibe coder wouldn't have caught it and hackers would've pwned their webapp.
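Not the commenter's actual code, of course, but one common way that bug shows up looks roughly like this (hypothetical endpoint and helper names): the "is this user allowed?" decision lives in the browser, and the write path itself accepts anything.

// What a model might generate: the browser decides it's "allowed" and writes
// straight to a public endpoint. Anyone can call this from devtools, so
// unauthenticated users can write to the database.
async function saveNoteInsecure(note: string): Promise<void> {
  await fetch("/api/notes", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ note }),
  });
}

// The fix lives on the server: validate the session before touching the database.
// (Express-style sketch; getUserFromSession and db are hypothetical helpers.)
//
// app.post("/api/notes", async (req, res) => {
//   const user = await getUserFromSession(req);  // who is actually making this request?
//   if (!user) return res.status(401).end();     // reject unauthenticated writes
//   await db.insertNote(user.id, req.body.note);
//   res.status(201).end();
// });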
no, we don't
Then yeah, it makes sense.
(1) Tooling to enable better evaluation of generated code and its adherence to conventions and norms
(2) Process to impose requirements on the creation/exposure of PRDs/prompts/traces
(3) Management to guide devs in the use of the above and to implement concrete rewards and consequences
Some organizations will be exposed as being deficient in some or all of these areas, and they will struggle. Better organizations will adapt.
I think this is a neglected area that will see a lot of development in the near future. I think that even if development on AI models stopped today - if no new model was ever trained again - there are still decades of innovation ahead of us in harnessing the models we already have.
Consider ChatGPT: the first release relied entirely on its training data to answer questions. Today, it typically does a few Google searches and summarizes the results. The model has improved, but so has the way we use it.
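That search-then-summarize pattern is itself a simple harness. A minimal sketch, with webSearch and ask as hypothetical stand-ins for a real search API and model client:

// Ground the model in fresh search results, then have it answer from them.
type SearchHit = { title: string; snippet: string; url: string };
type WebSearch = (query: string) => Promise<SearchHit[]>;
type Ask = (prompt: string) => Promise<string>;

async function answerWithSearch(question: string, webSearch: WebSearch, ask: Ask): Promise<string> {
  const hits = await webSearch(question);       // fetch current results
  const context = hits
    .slice(0, 5)                                // keep the prompt small
    .map((h) => `${h.title} (${h.url}): ${h.snippet}`)
    .join("\n");
  return ask(`Using only these search results:\n${context}\n\nAnswer: ${question}`);
}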
It might be just me, but this reads as very tone-deaf. From my perspective, CEOs are frothing at the mouth to make as many developers redundant as possible, and they're not shy about that desire. (I don't see this at all as inevitable, but tech leaders have made their position clear.)
Like, imagine the smugness of some 18th-century "CEO" telling an artisan who'll be consigned to working in horrific conditions at a factory not to worry, and to think of all the mass-produced consumer goods he may enjoy one day.
It's not at all a stretch of the imagination that current tech workers may be in a very precarious situation. All the slopware in the world wouldn't console them.
While the idea of programmers working two hours a day and spending the rest of it with their family seems sunny, that's absolutely not how business is going to treat it.
Thought experiment... A CEO has a team of 8 engineers. They do some experiments with AI, and they discover that their engineers are 2x more effective on average. What does the CEO do?
a) Change the workweek to 4 hours a day so that all the engineers have better work/life balance since the same amount of work is being done.
b) Fire half the engineers, make the 4 remaining guys pick up the slack, rinse and repeat until there's one guy left?
Like, come on. There's pushback on this stuff not because the technology is bad (although it's overhyped), but because no sane person trusts our current economic system to provide anything resembling humane treatment of workers. The super rich are perfectly fine seeing half the population become unemployed, as far as I can tell, as long as their stock numbers go up.
Not a plug, but really, that's exactly why we're building sandboxes for agents with local-laptop quality. Starting with remote Xcode+sim sandboxes for iOS, and high-mem sandboxes with a GPU-accelerated Android Emulator for Android.
No machine allocation but composable sandboxes that make up a developer persona’s laptop.
If interested, a quick demo here https://www.loom.com/share/c0c618ed756d46d39f0e20c7feec996d
muvaf[at]limrun[dot]com
Or less.
And I don't think it's collar color they're going to be checking against.
So I guess I'm saying I agree that this is powerful and dangerous. These are language models, so they're more effective against humans and their languages. And self-preservation, empathy, humanity do not play a role as there is nobody in there to be offended at the notion of intentionally killing more than 9/10 of humanity… for some definitions of humanity, ones I'm sympathetic to.
First, we currently have 4 frontier labs, and a bunch of 2nd-tier ones following. The fact that we don't have just oAI or just Anthropic or just Google is good in the general sense, I would say. The 4 labs racing each other and trading SotA status every few weeks is good for the end consumer. They keep each other honest and keep the prices down. Imagine if Anthropic could charge $60/MTok or oAI could charge $120/MTok for their GPT-4-style models. They can't, in good part because of the competition.
Second, there's a bunch of labs/companies that have released and are continuing to release open models. That's as close to "intelligence on tap" as you can get. And those models are ~6-12 months behind the SotA models, depending on your use case. Even though the labs have largely different incentives to do so, a lot of them are still releasing open models. Hopefully that continues to hold. So not all control will be in the hands of big tech, even if the "best" will still be theirs. At some point "good enough" is fine.
There's also the thing about geopolitics being involved in this. So far we've seen the EU jumping the gun on regulation, and we're kinda sorta paying for it. Everyone is still confused about what can or cannot be done in the EU. The US seems to be waiting to see what happens, and China will do whatever they do. The worst thing that can happen is that at some point the big players (Anthropic is the main driver) push for regulatory capture. That would really suck. Thankfully, atm there's this lingering thinking that "if we do it, the others won't, so we'll be on the back foot". Hopefully this holds, at least until the "good enough" from above is out :)
The AI labs started down this path using the Manhattan Project as a metaphor and guess what? It's a good metaphor and we should embrace most of the wider implications of that (though I'd love to avoid all the MAD bullshit this time).