I can write a spec for an entirely new endpoint, and Claude figures out all of the middleware plumbing and the database queries. (The catch: this is in Rust and the SQL is raw, without an ORM. It just gets it. I'm reviewing the code, too, and it's mostly excellent.)
I can ask Claude to add new data to the return payloads - it does it, and it can figure out the cache invalidation.
These models are blowing my mind. It's like I have an army of juniors I can actually trust.
Where's the catch? SQL is an old technology; surely an LLM is good with it.
In my experience, agentic LLMs tend to write code that is very branchy, with high cyclomatic complexity. They don't follow DRY principles unless you push them very hard in that direction (and even then not always), and sometimes they do things that just fly in the face of common sense. Example of that last part: I was writing some Ruby tests with Opus 4.6 yesterday, and I got dozens of tests that amounted to this:
x = X.new
assert x.kind_of?(X)
This is of course an entirely meaningless check. But if you aren't reading the tests and you just run the test job and see hundreds of green check marks and dozens of classes covered, it could give you a false sense of security.
You are missing the forest for the trees. Sure, we can find flaws in the current generation of LLMs. But they'll be fixed. We have a tool that can learn to do anything as well as a human, given sufficient input.
This is not an appropriate analogy, at least not right now.
Code agents are generating code from prompts, and in that sense the metaphor is correct. However, agents then read the code they produced; it becomes input, and they generate more code. This was never the case for compilers. An LLM used in this sense is strictly not a compiler, because the process is cyclic rather than one-directional.
"Generate a Frontend End for me now please so I don't need to think"
LLM starts outputting tokens
Dopamine hit to the brain as I get my reward without having to run npm and figure out what packages to use
Then out of a shadowy alleyway a man in a trenchcoat approaches
"Pssssttt, all the suckers are using that tool, come try some Opus 4.6"
"How much?"
"Oh that'll be $200.... and your muscle memory for running maven commands"
"Shut up and take my money"
----- 5 months later, washed up and disconnected from cloud LLMs ------
"Anyone got any spare tokens I could use?"
Here's $1000. Please do that. Don't bother with the LLM.
"I prompted it like this"
"I gave it the same prompt, and it came out different"
It's not programming. It might be having a pseudo-conversation with a complex system, but it's not programming.
Well I think the article would say that you can diff the documentation, and it's the documentation that is feeding the AI in this new paradigm (which isn't direct prompting).
If the definition of programming is "a process to create sets of instructions that tell a computer how to perform specific tasks" there is nothing in there that requires it to be deterministic at the definition level.
Functions like:
updatesUsername(string) returns result
...can be turned into a generic functional euphemism:
takeStringRtnBool(string) returns bool
...same thing. Context can be established by the data passed in and by external system interactions (updating user values, an inventory of widgets).
As workers, SWEs are just obfuscating how repetitive their effort is to people who don't know better.
The era of pure data-driven systems has arrived. In line with the push to dump OOP, we're dropping irrelevant context from the code altogether: https://en.wikipedia.org/wiki/Data-driven_programming
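Rough sketch of what that looks like (hypothetical names, Python just for illustration): the behavior lives in a data table, and a single generic function applies whatever the data says.

    # Hypothetical sketch of data-driven dispatch: "what to do" is data, not branches.
    OPERATIONS = {
        "update_username": lambda store, value: store.update({"username": value}),
        "update_email": lambda store, value: store.update({"email": value}),
    }

    def apply_op(store, op_name, value):
        # Generic takeStringRtnBool-style entry point: meaning comes from the data passed in.
        op = OPERATIONS.get(op_name)
        if op is None:
            return False
        op(store, value)
        return True

    user = {"username": "old", "email": "old@example.com"}
    print(apply_op(user, "update_username", "new_name"))  # True
    print(user)  # {'username': 'new_name', 'email': 'old@example.com'}

Adding a new operation is a new row in the table, not a new branch in the code.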
Any bets on software salaries and employment over the next five to ten years?
If the demand for software goes up, we should be okay. If it remains flat, our careers are toast.
Better learn to drive trucks or something.
I wrote a program in C and gave it to gcc. Then I gave the same program to clang and I got a different result.
I guess C code isn't programming.
gcc and clang produce different assembly code, but it "does the same thing," for certain definitions of "same" and "thing."
Claude and Gemini produce different Rust code, but it "does the same thing," for certain definitions of "same" and "thing."
The issue is that the ultimate beneficiary of AI is the business owner. He's not a programmer, and he has a much looser definition of "same."
This is a completely realistic scenario, given variance between compiler output based on optimization level, target architecture, and version.
Sure, LLMs are non-deterministic, but that doesn't matter if you never look at the code.
I don't think I am. If you ask an LLM for a burger web site, you will get a burger web site. That's the only category that matters.
My brother in Christ, please get off your condescending horse. I have written compilers. I know how they work. And also you've apparently never heard of undefined behavior.
The point is that the output is different at the assembly level, but that doesn't matter to the user. Just as output from one LLM may differ from another's, but the user doesn't care.
Well, you sound like an ignorant troll who came here to insult people and start fights. Which also happens a lot on the internet.
Take your abrasive ego somewhere else. HN is not for you.
If one generated burger website uses PHP and the other plain JavaScript, which completely changes the way the website has to be hosted, then this category matters quite a bit, no?
It matters to you because you're a programmer, and you can't imagine how someone could create a program without being a programmer. But it doesn't really matter.
The non-technical user of the LLM won't care if the LLM generates PHP or JS code, because they don't care how it gets hosted. They'll tell the LLM to take care of it, and it will. Or more likely, the user won't even know what the word "hosting" means, they'll simply ask the LLM to make a website and publish it, and the LLM takes care of all the details.
Feels like the non-programmer is going to care a little bit about paying for 5 different hosting providers because the LLM decided to generate their burger website in PHP, JavaScript, Python, Ruby and Perl in successive iterations.
>"I gave it the same prompt, and it came out different"
1:1 reproducibility is much easier in LLMs than in software building pipelines. It's just not guaranteed by major providers because it makes batching less efficient.
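For illustration, assuming you run an open-weights model yourself with Hugging Face transformers (the model name here is just a stand-in), greedy decoding gives you the same tokens on every run on the same hardware and library versions; hosted APIs just don't promise that, because fully deterministic serving makes batching less efficient.

    # Minimal sketch: greedy decoding is reproducible run-to-run on a fixed setup.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)  # only matters if sampling were enabled; greedy ignores it
    tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The same prompt, run twice:", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False)  # greedy: no sampling
    print(tok.decode(out[0], skip_special_tokens=True))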
What’s a ‘software building pipeline’ in your view here? I can’t think of parts of the usual SDLC that are less reproducible than LLMs, could you elaborate?
I ask the developer the simplest questions, like "which of the multiple entry points do you use to test this code locally", or "you have a 'mode' parameter here that determines which branch of the code executes; which of these modes are actually used?", and I get a bunch of babble, because he has no idea how any of it works.
Of course, since everyone is expected to use Cursor for everything and move at warp speed, I have no time to actually untangle this crap.
The LLM is amazing at some things - I can get it to one-shot adding a page to a react app for instance. But if you don't know what good code looks like, you're not going to get a maintainable result.
The implementations that come out are buggy or just plain broken.
The problem is a relatively simple one, and the algorithm uses a few clever tricks. The implementation is subtle... but it nonetheless exists in both open and closed source projects.
LLMs can replace a lot of CRUD apps and skeleton code, tooling, scripting, infra setup, etc., but when it comes to the hard stuff they still suck.
Give me a whiteboard and a fellow engineer any day.
toprerules•1h ago
The irony is that I haven't seen AI have nearly as large an impact anywhere else. We truly have automated ourselves out of work; people are just catching up with that fact, and the people who just wanted to make money from software can now finally stop pretending that "passion" for "the craft" was ever really part of their motivating calculus.
shahbaby•1h ago
So when things break or they have to make changes, and the AI gets lost down a rabbit hole, who is held accountable?
toprerules•55m ago
My point is that SWEs are living on a prayer that AI will be perched on a knife's edge where there will still be some amount of technical work to make our profession sustainable, and from what I'm seeing that's not going to be the case. It won't happen overnight, but I doubt my kids will ever even think about a computer science degree or doing what I did for work.
Quothling•16m ago
I make it sound like I agree with you, and I do to an extent. Hell, I'd want my kids to be plumbers or similar, where a couple of years ago I would've wanted them to go to a university. With that said, I still haven't seen anything from AIs to convince me that you don't need computer science. To put it bluntly, you don't need software engineering to write software, until you do. A lot of the AI-produced software doesn't scale, and none of our agents have been remotely capable of producing quality, secure code, even in the hands of experienced programmers. We haven't seen any real change on that front over the past two years either.
Of course this doesn't mean you're wrong either, because we're going to need a lot fewer programmers regardless. We need the people who know how computers work, but in my country that is a fraction of the total IT worker pool available. In many CS programs, students aren't even taught how a CPU or memory works. They are instead taught design patterns, OOP, and clean architecture, which are great when humans are maintaining code, but even small abstractions can cause L1-L3 cache misses. Which doesn't matter, until it does.
asa400•53m ago
But if your job depends on taste, design, intuition, sociability, judgement, coaching, inspiring, explaining, or empathy in the context of using technology to solve human problems, you’ll be fine. The premium for these skills is going _way_ up.
hackyhacky•38m ago
We are in this pickle because programmers are good at making tools that help programmers. Programming is the tip of the spear, as far as AI's impact goes, but there's more to come.
Why pay an expensive architect to design your new office building, when AI will do it for peanuts? Why pay an expensive lawyer to review your contract? Why pay a doctor, etc.
Short term, doing for lawyers, architects, civil engineers, doctors, etc what Claude Code has done for programmers is a winning business strategy. Long term, gaining expertise in any field of intellectual labor is setting yourself up to be replaced.