Coding was incredibly fun until working in capitalist companies got involved. It was still fairly fun then, but tinged by some amount of "the company is just trying to make money, it doesn't care that the pricing sucks and it's inefficient, it's more profitable to make mediocre software with more features than to really nail and polish any one part."
On top of that, AI affects how fun coding is for me exactly as they say, and that compounds with the company's misaligned incentives.
... I do sometimes think maybe I'm just burned out though, and I'm looking for ways to rationalize it, rather than doing the healthy thing and quitting my job to join a cult-like anti-technology commune.
For me, I'm vaguely but persistently thinking about a career change, wondering if I can find something of more tangible "real world" value. An essential part of that is the question of whether any given tech job holds much apparent "real world" value at all.
AI is just one of those arms races we imposed on ourselves, out of a desire to dominate others or to protect ourselves from such domination. It is irreversible, just like the others. It survives by using the same tactic as a cheap salesman: tell the first buyer that they can dominate the world, then tell the next buyers that they need to protect themselves from the first one.
We transformed our lifestyles to live with those unnecessary, business/politics driven "advancements". The saga continues.
BTW, electronic calculators did a similar thing when they came out, erasing the fun of doing calculations by hand.
I'd argue you didn't lose the joy of coding, you lost the illusion that coding made you real, that it made you you.
What plagues me about LLMs is that all that generated code is still around in the project, making reviews harder, as well as understanding the whole program source. What is it that makes you prefer this mechanism over the abstractions that have been increasingly available since forever?
Maybe I was lucky. For me, the joy was the power of coding. Granted, I'm not employed as a coder. I'm a scientist, and I use coding as a problem solving tool. Nothing I write goes directly into production.
What's gone is the feeling that coding is a special elite skill.
With that said, I still admire and respect the real software developers, because good software is more than code.
I experimented with GPT-5 recently and found its capabilities to be significantly inferior to those of a human, at least when it came to coding.
I was trying to give it an optimal environment, so I set it to work on a small JavaScript/HTML web application, and I divided the task into small steps, as I'd heard it did best under those circumstances.
I was impressed overall by how far the technology has come, but it produced a number of elementary errors, such as putting JavaScript outside the script tags. As the code grew, there was also no sense that it had a good idea of how to structure the codebase, even when I suggested it analyze and refactor.
So unless there are far more capable models out there, we're not at the stage where generative AI can match a human.
In general I find current models to have broad but shallow thinking. They can draw on many sources, which is extremely useful, but they seem to have problems reasoning things through in depth.
All this is to say that I don't find the joy of coding to have gone at all. In fact, there have been a number of really thorny problems I've had to deal with recently that I'd love to have side-stepped, but due to the current limitations of LLMs I had to solve them the old-fashioned way.
GPT-5 what? The GPT-5 models range from goofily stupid to brilliant. If you let it select the model automatically, which is the default, it will tend to lean towards the former.
I also briefly tried out some of the other paid-for models, but mostly worked with GPT-5.
The models are one part of the story, but the software around them matters at least as much: what tools does the model have access to, like bash, or just file reading, or (as in your example!) just a cache of files visited by the IDE(!)? How does the software decide what extra context to provide to the model? How does it record past learnings from conversations and failed test runs (if at all!), and how are those fed back in? And of course, what are the system prompts?
None of this is about the model; it's all "plain old" software around the model. Increasingly, that's where the quality differences lie.
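To make that concrete, here is a minimal sketch of what such a harness might look like. It's TypeScript for Node; `callModel` is a stub, and every name in it is made up for illustration rather than being any vendor's real interface:

```typescript
// Minimal sketch of an agent harness: a tool registry, context assembly,
// and a system prompt. callModel is a stub; a real product would call a
// model provider's API here. All names are hypothetical.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

type Tool = { name: string; description: string; run: (arg: string) => string };

const tools: Tool[] = [
  { name: "bash", description: "run a shell command", run: (cmd) => execSync(cmd).toString() },
  { name: "read_file", description: "read a file from the repo", run: (path) => readFileSync(path, "utf8") },
];

// Stub standing in for the actual model call.
function callModel(context: string): string {
  return `model response to ${context.length} chars of context`;
}

function agentStep(task: string, pastLearnings: string[]): string {
  // Context assembly: the harness, not the model, decides what the model sees.
  const context = [
    "SYSTEM: You are a coding agent. Available tools:",
    ...tools.map((t) => `- ${t.name}: ${t.description}`),
    ...pastLearnings.map((l) => `LEARNED: ${l}`), // e.g. notes from failed test runs
    `TASK: ${task}`,
  ].join("\n");
  return callModel(context);
}

console.log(agentStep("fix the failing test", ["this repo uses pnpm, not npm"]));
```

The point of the sketch: the model only ever sees the string the harness assembles, so tool choice, learning capture, and system prompts are ordinary software decisions, and that's exactly where products can differ.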
I am sorry to say but Copilot is just sort of shoddy in this regard. I like Claude, some people like Codex, there are a bunch of options.
But my main point is: it's probably not about the model, but about the products built on the models, which can vary wildly in quality.
I think we should step back and ask: do we really want that? What does that imply? Until recently nobody would use a tool and think, yuck, that was inferior to a human.
I find LLMs struggle constantly with languages for which documentation is scarce or out of date. RAG, LoRA, and multiple agents help, but they have their own issues as well.
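For the RAG part, a toy sketch of the idea: retrieve the most relevant doc snippets for a query and prepend them to the prompt. "FooLang" and its docs are invented here, and the keyword-overlap scorer is a stand-in for real embedding similarity:

```typescript
// Toy RAG sketch: score docs against the query, stuff the top-k into the
// prompt. A real system would use embeddings and a vector index instead
// of this keyword overlap; FooLang is a made-up language.
const docs: string[] = [
  "In FooLang 2.x, `spawn` replaced the deprecated `fork` builtin.",
  "FooLang arrays are 1-indexed; slices copy by value.",
  "The FooLang IRC channel is the only support venue.",
];

// Score a doc by how many query words it contains (embedding stand-in).
function score(query: string, doc: string): number {
  const q = new Set(query.toLowerCase().split(/\W+/));
  return doc.toLowerCase().split(/\W+/).filter((w) => q.has(w)).length;
}

// Build a prompt with the top-k retrieved snippets in the context.
function buildPrompt(query: string, k = 2): string {
  const retrieved = [...docs].sort((a, b) => score(query, b) - score(query, a)).slice(0, k);
  return ["Answer using only these docs:", ...retrieved, `Question: ${query}`].join("\n");
}

console.log(buildPrompt("why does fork fail in FooLang 2.3?"));
```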
Still an infinite amount to learn and do. It's still not hard to have more skill than an AI. Of course AI can solve all the dumbbell problems you get in school. They're just there to build muscle. Robots can lift weights better than you, too, but that doesn't mean there's no value in you doing it.
If you find coding boring, explore the frontiers. You will find a lot of coding wilderness where no AI has trod.
This. AI is nothing without a data set.
So if you're working on bleeding-edge technology where your tool has only three contributors and the only way to reach them is an IRC channel once a day, things get interesting.
Ericson2314•1h ago
Good things to look forward to are:
- Lean and mathlib revolutionizing math (tiny example below)
- Typst replacing LaTeX and maybe some Adobe products
- Fuchsia/Redox/WASI replacing Unix
- non-professional programmers finally learning programming en masse
I think the latter is maybe the most profound. Tech may not grow at a break-neck pace, but erasing the programmer vs. computer-illiterate dichotomy will mean software can shape the world in much less Kafkaesque ways.
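For the Lean/mathlib item, a tiny taste of what machine-checked math looks like (a minimal example in plain Lean 4; mathlib scales this style up to research-level mathematics):

```lean
-- Plain Lean 4, no mathlib needed for something this small: a
-- machine-checked proof that addition on natural numbers commutes.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```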
Pamar•1h ago
I've met lots of "digital natives", and they seem to use technology as a black box, clicking/touching stuff at random until it sorta works. But they are not very good at building a mental model of why something is behaving in a way that wasn't expected, and at verifying their own hypotheses (i.e. "debugging").
maegul•1h ago
And more so with AI software/tools, and IMO frighteningly so.
I don't know where the open-model people are at, but as a response to this I'd wager they'll end up playing the Linux desktop game all over again.
All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?
It doesn't have to be individually, as groups of people can be good at understanding something beyond any individual. But a productivity gain isn't on its own a sufficient response to this question.
Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).
It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.
righthand•1h ago
I don't think this is what you think it is. It's more like non-professional programmers hacking together all the applications they wanted to hack together before. The LLM is just the glue.
IMO, they are NOT learning programming.
safety1st•1h ago
In the last few years we've seen first Valve with SteamOS, and now 37signals with Omarchy, release Linux distros which are absolutely great for their target audience and function as a general purpose operating system just fine. Once might just be a fluke... Twice is a pattern starting to emerge.
Are we witnessing the beginning of a new operating system ecosystem where you only have to be a billion dollar company to produce a viable operating system instead of a trillion dollar one?
How many of our assumptions about computing are based on the fact that for 30+ years, only Microsoft, Apple and Google got to do a consumer OS?
And a preponderance of the little components that make up this "new" OS ecosystem were developed by some of the most radical software freedom fighters we've got.
Is this a long shot I'm thinking about? You bet. But the last time I was this excited about the future, I was a teenager and most homes still didn't have a PC.