The answer to this is nuanced. You can summon ~30k LoC codebases using CC without much fanfare. Will it work? Maybe, in some of the ways you were thinking -- it will be a simulacrum of the thing you had in your mind.
If something is not working, you need to know the precise language to direct CC (and _if you know this language_, you can use CC like a chisel). If you don't know this language, you're stuck -- but you'd also be stuck if you were writing the thing _by hand_.
In contrast to _writing the thing by hand_, you can ask CC to explain what's going on. What systems are involved here, is there a single source of truth, explain the pipeline to me ...
It's not as black and white as this paragraph makes it sound. The "details" you need to know vary across a broad spectrum, and excellent wizards can navigate the whole spectrum and know the meta-strategies to use CC (or whatever agentic system) to unstick themselves in the parts of the spectrum they don't know.
(a lot of that latter skill set correlates with being a good programmer in the first place)
Many of my colleagues who haven't used CC as much seem to get stuck in a "one track" frame of mind about what it can and cannot do -- without realizing that they don't know the meta-strategies ... or that they're describing just one particular workflow. It's a programmable programming tool, don't put it into a box.
Bloggers have been kidding themselves for decades about how invigorating programming is, how intellectually demanding it is, how high the IQ demands are, like they're Max von Sydow playing chess with Death on the beach every time they write another fucking unit test. Guess what: a lot of the work programmers do, maybe even most of it, is rote. It should be automated. Doing it all by hand is part of why software is so unreliable.
You have a limited amount of electrical charge in your brain for doing interesting work every day. If you spend it on the rote stuff, you're not going to have it to do actually interesting algorithmic work.
I'm still trying to figure out the answer to that question for myself. Maybe the answer is, "Probably not, and it probably doesn't matter" but I'm still trying to figure out what kind of downstream effects that may have later on my judgment.
Mental expenditure on programming is also not linear through a task; it takes much more energy to get started than to do the back half. Ever stared at an empty function for a minute trying to come up with the right variable name, or choosing which value to compute first? LLMs are geniuses at just getting things started.
I will maybe spend 5-10 minutes reviewing and refining the code with the help of Claude Code and then the rest of the time I will go for another feature/bugfix.
Case in point recently I was working on a mobile app where I had to check for a whole litany of user permissions and present UI to the user if any particular permission was missing, including instructions on how to rectify it.
Super annoying to do manually, but Claude Code was not only able to exhaustively enumerate all possible combos of missing permissions, but also automatically create the UIs for each edge case. I reviewed all of it for accuracy, which took some time.
I probably would've missed some of the more obscure edge cases on my own.
Overall maybe not much faster than doing it myself, but I'm pretty sure the results were substantially better.
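To give a sense of the shape of the thing, here is a minimal Python sketch, not the actual app code; the permission names and remediation messages are made up:

    from itertools import combinations

    # Hypothetical permission names; the real app used platform-specific ones.
    PERMISSIONS = ["camera", "microphone", "location", "notifications"]

    # Made-up remediation text shown to the user for each missing permission.
    FIX_INSTRUCTIONS = {
        "camera": "Enable Camera access in Settings > Privacy.",
        "microphone": "Enable Microphone access in Settings > Privacy.",
        "location": "Allow Location access while using the app.",
        "notifications": "Turn on Notifications for this app in Settings.",
    }

    def missing_permission_combos():
        # Every non-empty combination of permissions that could be missing.
        for r in range(1, len(PERMISSIONS) + 1):
            yield from combinations(PERMISSIONS, r)

    def build_prompt(missing):
        # Compose the UI copy for one combination of missing permissions.
        lines = [f"This feature needs: {', '.join(missing)}."]
        lines += [FIX_INSTRUCTIONS[p] for p in missing]
        return "\n".join(lines)

    for combo in missing_permission_combos():
        print(build_prompt(combo) + "\n---")

The tedious part is exactly this kind of exhaustive enumeration plus per-case UI, which is why having it generated and then reviewed felt like the right trade.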
While I was manning a booth, this software developer came up to me and said VS had gotten too good at generating code to automate data access, and we should cut it out because writing that code was the vast majority of what he did. I thought he was joking, but no, he was totally serious.
I said something to him about how those tools were saving him from having to do boring, repetitive work so that he could focus on higher value, more interesting development, instead, but he wasn’t having it.
I think about him quite often, especially these days. I wonder if he's still programming, and what he thinks about LLMs.
On the other hand software development in the high sense, i.e. producing solutions for actual problems that real people have, is certainly intellectually demanding and also something that allows for several standard deviations in skill level. It's fashionable to claim we all have bullshit jobs, but I don't think that's a fair description at all.
Absolutely agreed, but I think the idea is that coding tools (or languages, or libraries, or frameworks) frees us to do the actually hard, skill-intensive bits of this, because the thing that's intellectually demanding isn't marshaling and unmarshaling JSON.
Most of this work should go away. Much of the rest of it should be achievable by the domain experts themselves at a fraction of the cost.
Instead, it's the opposite.
You used to have to write tons of real code to stand up something as simple as a backend endpoint. Now a lot of this stuff is literally declarative config files.
Ditto frontends. You used to have to imperatively manage all kinds of weird bullshit, but over the last decade we've gradually moved to... declarative, reactive patterns that let some underlying framework handle the busywork.
We also created... declarative config files to ensure consistent deploys every time. You know, instead of ssh'ing into production machines to install stuff.
We used to handle null pointers, too, and tore our hair out because a single missed check caused the whole process to go poof. Now... it's built into the language and it is physically impossible to pull off a null pointer dereference. Awesome!
We've been "putting ourselves out of work" for going on decades now, getting rid of more boilerplate, more repetitive and error-prone logic, etc etc. We did it with programming languages, libraries, and frameworks. All in the service of focusing on the bits we actually care about and that matter to us.
This is only the latest in a long line of things software engineers did to put themselves out of work. The method of doing it is very new, the ends are not. And like every other thing that came before it, I highly doubt this one will actually succeed in putting us out of work.
It's such a great tool for learning, double checking your work, figuring out syntax or console commands, writing one-off bash scripts etc.
I wonder if some of the disconnect between the AI coding fans and skeptics is just the language they're writing.
Considering the state of today's social media landscape and people's relationship to it, this fills me with dread.
Hopefully it doesn't take 2 decades of AI usage to have that conversation tho.
I'm not sure if this is supposed to be ironic but it gave me a good chuckle nonetheless.
There's also a lot of talk about drinking more moderately down at my local bar.
If there's anything I've learned about software, "intelligent" usually means "we've thrown a lot of intelligent people at the problem, and made the solution even more complicated".
Machine learning is not software, but probably should be approached as such. It's a program that takes some input and transforms it into some output. But I suppose if society really cared about physical or mental health, we wouldn't have had cigarettes or slot machines.
When thinking through a claim of what AI can do, you can do the same. “AI” -> “just some guy”. If that doesn’t feel fair, try “AI” -> “some well-read, eager-to-please intern”.
Sure let's call the AI names, behind its back and to its face if we're feeling particularly bold, but is that actually going to amount to anything?
There’s a reason intermittent rewards are so intoxicating to naturally evolved brains: exploiting systems that give intermittent rewards is a great resource acquisition strategy.
Which will be the case is the interesting question.
No. They would become extremely useful and more magical. Because instead of weird incantations and shamanic rituals of "just one more .rules file, bro, I swear" you could create useful reproducible working tools.
I don't think that's true. I'm wondering if the author has tried Claude 4 Opus.
It's funny, because I do not like the process of software engineering at all! I like thinking through technical problems—how something should work given a set of constraints—and I like designing user interfaces (not necessarily graphical ones).
And I just love using Claude Code! I can tell it what to do and it does the annoying part.
It still takes work, by the way! Even for entirely "vibe coded" apps, I need to think through exactly what I want, and I need to test and iterate, and when the AI gets stuck I need to provide technical guidance to unblock it. But that's the fun part!
1) people who haven't programmed in a while for whatever reason (became executives, took a break from the industry, etc)
2) people who started programming in the last 15 or so years, which also corresponds with the time when programming became a desirable career for money/lifestyle/prestige (chosen out of not knowing what they want, rather than knowing)
3) people who never cared for programming itself, more into product-building
To make the distinction clear, here are example groups unlikely to like AI dev:
1) people who programmed for ~25 years (to this day)
2) people who genuinely enjoy the process of programming (regardless of when they started)
I'm not sure if I'm correct in this observation, and I'm not impugning anyone in the first groups.
But I've been using Claude non-stop this summer on personal projects and I just love the experience!
I've found AI to be a useful tool when using a new library (as long as the version is 2 years old) and in the limited use I've made of agents I can see the potential but also the dangers in wrong + eager hands.
I'm confused—can you expand on this? What's "the work" that you've "grown to hate?" Is it "coding," or is it your "responsibilities?"
We need a new feature. Ok add this controller, add some if statements for this business logic, make some api calls, add this to the db, write some models. Ok done, same thing over and over again.
I'd certainly love to be able to do the architecting part and have someone do the work
If work wants me to use it for the job, then sure why not? That too is something new to learn how to do well, will possibly be important for future career growth, and is exciting in a different way. If anything, I’ve got spare mental compute by the end of the week and might even have energy to do my hobbyist stuff.
Win win for me.
I can't enter a flow state since the workflow boils down to waiting and then getting interrupted, and then waiting again. Often the LLM does the wrong thing and then instead of moving to implement another feature, I'm stuck in a loop where I'm trying to get it to fix poor decisions or errors.
It's possible I get a feature implemented faster thanks to agentic LLM, but the experience of overseeing and directing it is dreadful and pretty much invariably I end up with some sort of tech debt slop.
I much prefer the chat interfaces for incorporating LLMs into my workflow.
I actually get to do the job I love which is problem solving.
> people who genuinely enjoy the process of programming (regardless of when they started)
I began programming at 9/10, and it's been one of only a few lifelong passions. But for me, the code itself was always just a means to an end. A tool you use to build something.
I enjoy making things.
That's true, but there's something qualitatively different about writing a song on a guitar vs. prompting to create a song in Suno. The guitar (or piano/Ableton/whatever) is an instrument, whereas Suno is… I'm not really sure.
But that difference makes me totally disinterested in using Suno to produce music. And in the same way — even though I also consider code "just a means to an end" — I'm also totally disinterested in using Claude Code to produce software.
I tend to use Claude Code in 2 scenarios. YOLO where I don’t care what it looks like. One shot stuff I’ll never maintain.
Or a replacement for my real hands on coding. And in many cases I can’t tell the difference after a few days if I wrote it or AI did. Of course I have well established patterns and years of creating requirements for junior devs.
The people most against AI assistance are those that define themselves by what they do, have invested a lot into honing their craft and enjoy the execution of that craft.
I have been getting paid to program for over 35 years, agentic coding is a fresh breeze. https://www.youtube.com/watch?v=YNTARSM-Fjc&list=PLBEB75B6A1...
Software engineering is very different. There's a lot of debugging and tedious work that I don't enjoy, which AI makes so much better. I don't care about CSS, I don't want to spend 4 hours trying to figure out how to make the button centered and have rounded corners. Using AI I can make frontend changes in minutes instead of days.
I don't use the AI to one shot system design, although I may use it to brainstorm and think through ideas.
But if I could press a button and make finished software appear, I would.
New tools are capability-increasing; this is most apparent at the entry level but most impactful at the margins: the difficulty of driving a taxi is now zero, driving an F1 car is now harder, but F1 cars might soon break the sound barrier.
This is not a democratizing force at the margins if one bases like/dislike on that.
Most of the people I know who use AI coding tools do so selectively. They pick the right tool for the job and they aren’t hesitant to switch or try different modes.
Whenever I see someone declare that the other side is dead or useless (manual programming or AI coding) it feels like they’re just picking sides in a personal preference or ideology war.
It’s like having an amazing team of super talented junior/mid-level engineers along with some crazy maverick experts on tap.
Like a lot of people here, my earliest memories of coding are of me and my siblings typing games printed in a BASIC book, on a z80 clone, for 30-60 minutes, and then playing until we had to go to bed, or the power went out :) We only got the cassette loading thing years later.
I've seen a lot in this field, but honestly nothing even compares to this one. This one feels like it's the real deal. The progress in the last 2.5 years has been bananas, and by every account the old "AI is the worst it's ever gonna be" seems to be holding. Can't wait to see what comes next.
Of course that's dependent on how caching gets implemented/where/when/how, but it's not unsolvable for common occurrence questions/answers.
As for getting the SOTA questions wrong : we as humans would likely also go through an iterative feedback loop until initial success and experience, too.
Take Gemini 2.5 for example. It has an enormous useful context. There were long-context gimmicks before, but the usefulness dropped like a stone after 30-40k tokens. Now the models work even with 100k+ tokens, and do useful tasks at those lengths.
The agentic stuff is also really getting better. 4.1-nano can now do stuff that sonnet 3.5 + a lot of glue couldn't do a year ago. That's amazing, imo. We even see that with open models. Devstral has been really impressive for its size, and I hear good things about the qwen models, tho I haven't yet tried them.
There's also proof that the models themselves are getting better at raw agentic stuff (i.e. they generalise). The group that released SWE-agent recently released mini-SWE-agent, a ~100 LoC harness that runs Claude 4 in a loop with no tools other than "terminal". And they still get to within 10% of their much larger, tool-supporting SWE-agent harness on SWE-bench.
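To be concrete about what such a harness amounts to: roughly a loop like this Python sketch, where ask_model is a stand-in for whatever LLM API you use (my simplification, not mini-SWE-agent's actual code):

    import subprocess

    def ask_model(transcript):
        # Stand-in for a real LLM call: given the conversation so far,
        # return the next shell command to run, or "DONE".
        raise NotImplementedError("wire up your model API here")

    def run_agent(task, max_steps=50):
        transcript = [f"Task: {task}. Reply with exactly one shell command "
                      "per turn, or DONE when finished."]
        for _ in range(max_steps):
            command = ask_model(transcript).strip()
            if command == "DONE":
                break
            result = subprocess.run(command, shell=True, capture_output=True,
                                    text=True, timeout=120)
            # The only "tool" is the terminal: feed its output straight back.
            transcript.append(f"$ {command}\n{result.stdout}{result.stderr}")
        return transcript

Presumably the real thing adds a system prompt and output-format guardrails on top, but the control flow really is about that small.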
I don't see the models plateauing. I think our expectations are overinflated.
I definitely don't love the process: design docs, meetings, code review, CI, e2e tests working around infrastructure that acts like it's too good to spin up in my test (Postgres, what are you doing? I used to init databases on machines less powerful than my watch; you can init in a millisecond in CI).
It is pretty clear to me agents are a key part of getting work done. Some 80% of my code changes are done by an agent. They are super frustrating, just like CI and E2E tests! Sometimes they work miracles, sometimes they turn into a game of whack-a-mole. Like the flaky E2E test that keeps turning your CI red but keeps finding critical bugs in your software, you cannot get rid of them.
But agents help me make computers do things, more. So I'm going to use them.
Do you know how many times I’ve solved the same boring thing over and over again in slightly different contexts?
Do you know how many things I look at and can see 6 ways to solve it and at least 3 of them will turn out fine?
I can get ai to do all that for me now. I only have to work on the interesting and challenging pieces.
A non-programming example: I do some work in library music. I thoroughly enjoy writing and producing the music itself. I don't like writing descriptions of the music, and I'm not very skillful at making any needed artwork. I don't use AI for the music part, but use AI extensively for the text and artwork.
(I'm also not putting a human out of work here; before using AI for these tasks, I did them myself, poorly!)
The things I've enjoyed writing the most have always been components "good practice" would say I should have used a library for (HTML DOM, databases) but I decided to NIH it and came up with something relatively pleasant and self contained.
When I use LLMs to generate code it's usually to interface to some library or API I don't want to spend time figuring out.
So even before AI my taste in what constitutes the joy of programming evolved and changed. AI lets me waste less time looking up and writing almost-boilerplate shit. I'm often writing things in new/different languages that, I'll be transparent, I'm not familiar with. I do still look at the code that gets generated (especially when Claude runs itself in circles and I fix it manually), and I roll my eyes when I find egregiously stupid code that it's generated. What I guess separates me, then, is that I just roll my eyes, roll up my sleeves, and get to work, instead of going off on a rant about how the future of programming is stupid, and I save even my own journal from a screed about the stupidity of LLMs. Because they do generate plenty of stupid code, but over the course of my career, I'd be lying if I claimed I never have.
As to the big question, do I like AI dev? Given that it may put me out of a job in "several thousand days", it would be easy to hate on it. But just as the world and my career moved on from fat clients on Windows in the 90's, so too will the work evolve to match modern tools, and fighting that isn't worth the energy, imo, better to adapt and just roll with it.
I started learning to program at about the same age I learned to read, so since the late 80s. While I was finishing secondary school, I figured out from first principles (and then wrote) a crude 3D wireframe engine in Acorn BASIC, and then a simple ray caster in REALbasic, while also learning C on classic Mac OS. At university I learned Java, and after I graduated I taught myself ObjC and Swift. At one of my jobs I picked up a bit of C++; at another, Python. I have too many side projects to keep track of.
Even though I recognise the flaws and errors of LLM generated code, I still find the code from the better models a lot better[0] than a significant fraction of the humans I've worked with. Also don't miss having a coworker who is annoyingly self-righteous or opinionated about what "good" looks like[1].
[0] The worse models are barely on the level of autocomplete — autocomplete is fine, but the worst models I've tried aren't even that.
[1] I appreciate that nobody on the outside can tell if me confidently disagreeing with someone else puts me in the same category as I'm describing. To give a random example to illustrate: one of the people I'm thinking of thought they were a good C++ programmer but hadn't heard of any part of the STL or C++ exceptions and wasn't curious to learn when I brought them up, did a lot of copy-pasting to avoid subclassing, asserted some process couldn't possibly be improved a few hours before I turned it from O(n^2) to O(n), and there were no unit tests. They thought their code was beyond reproach, and would not listen to anyone (not just me) who did in fact reproach it.
This is the sort of thing no one wants to do and leads to burnout.
The AI won't get burnt out going through a static analysis output and simplifying code, running tests, then rerunning the analysis for hours and hours at a time.
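The loop I have in mind is basically this rough Python sketch; ruff and pytest are just stand-ins for whatever analyzer and test runner you use, and ask_agent_for_patch is a placeholder for however you drive the model:

    import subprocess

    def run(cmd):
        # Run a shell command, return (exit code, combined output).
        p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return p.returncode, p.stdout + p.stderr

    def ask_agent_for_patch(report):
        # Placeholder: hand the analyzer/test output to the model and
        # let it edit the working tree.
        raise NotImplementedError

    def cleanup_loop(max_rounds=20):
        for _ in range(max_rounds):
            lint_code, lint_out = run("ruff check .")   # or any static analyzer
            test_code, test_out = run("pytest -q")
            if lint_code == 0 and test_code == 0:
                return True                             # clean: nothing left to fix
            ask_agent_for_patch(lint_out + "\n" + test_out)
        return False                                    # gave up after max_rounds

A person doing that by hand for an afternoon is miserable; the agent will happily grind through it round after round.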
And we have no realistic way of taking a drastically refactored application that was the result of hours of changes by an LLM and being confident that it doesn’t introduce bugs or remove load bearing bugs.
Static analysis and test suites aren’t good enough for pushing very large changes to production.
With 3 decades under my belt in the industry I can tell you one trait that THE BEST SWEs ALL have - laziness… if I had to manually do something 3 times, that shit is getting automated… AI dev took automation of mundane parts of our work to another level and I don’t think I could ever code without it anymore
I use Claude Code for two primary reasons:
1. Because whether i like it or not, i think it's going to become a very important tool in our craft. I figure i better learn how to use this shovel and find the value in it (if any), or else others will and leave me behind.
2. Because my motivation outweighs my physical ability to type, especially as i age. I don't have the endurance i once did and so being able to spend more time thinking and less time laboring is an interesting idea.
Claude Code certainly isn't there yet for my desires, but i'm still working on finding the value in it - thinking of workflows to accelerate general dev time, etc. It's not required yet, but my fear is soon enough it will be required for all but fun hobby work. It has the potential to become a power tool for a woodworker's shop.
> 1) people who programmed for ~25 years (to this day)
> 2) people who genuinely enjoy the process of programming (regardless of when they started)
> I'm not sure if I'm correct in this observation, and I'm not impugning anyone in the first groups.
I’ve been programming for almost 30 years. Started when I was 9 years old and I’ve been looking at code pretty much every day since then.
I love AI coding and leading teams. Because I love solving big problems. Bigger than I can do on my own. For me that’s the fun part. The code itself is just a tool.
There's nothing wrong with not being a programmer, but it is still kind of funny that "hackers" and their backers approve of the script kiddie way with their votes.
I don't think the 2) category is universal. There are many people in that category who know that following corporate hype will be rewarded, but I'm not sure they all like vibe coding.
I've been at what I do for 32+ years now, I love programming and I haven't stopped since I started.
I love claude code. Why? It increases discoverability in ways far and beyond what a search engine would normally do for me. It gets rid of the need to learn a new documentation format and API for every single project that has different requirements. It makes it less painful to write and deal with languages that represent minor common current trends that will be gone by next year. I no longer have to think about what a waste of time onboarding for ReactCoreElectronChromium is when it'll be gone next year when Googlesoftzon Co folds and throws the baby out with the bathwater.
Now I don't write code unless Claude does it, I just review.
I also compare "AI" to using a Ouija Board. It's not meant for getting real answers or truth. The game is to select the next letter in a word to create a sequence of words. It's an entertainment prop, and LLMs should be treated similarly.
I have also compared "Artificial Intelligence" to artificial flavors. Beaver anus is used as artificial vanilla flavoring (that is a real thing that happens), and "AI"/LLMs are the digital equivalent. Real vanilla tastes so much better, even if it is more expensive. I have no doubt that code written by real humans works much better than AI slop code. Having tried the various "AI" coding assistants, and I am not at all impressed with what it creates. Half the time if I ask for "vanilla", it will tell me "banana".
I guess you could use them like that, but you'll do much better if you try to get an understanding of the problem beforehand. This way you can divide the problem into subtasks with reasonably clear description, that Claude will be able to handle without getting lost and without needing too many corrections.
Also, you'll still need to understand pretty much every task that Claude has implemented. It frequently makes mistakes or does things in a suboptimal way.
When you use AI properly it's a great tool, it's like an IDE on steroids. But it won't take away the need to use your brains.
It proceeded to invent all the SQLFluff rules. And the ones that were actual rules were useless for the actual format that I wanted. I get it, SQLFluff rules are really confusing, but that's why I asked for help. I know how to code Python; I don't need AI to write code that I will then need to review.
It's not different from most people. Everyone runs into AI bullshit. However, hype and new tech optimism overrides everything.
You won't need to set up this stuff for an hour every time, you set it up once and then you just give it commands.
The skill ceiling is deceptive, it feels very low at first (after all, it's just natural language, right?) but getting an intuition for where these tools work best and where they break down, how to task them, what to include in the instructions, it takes a bit of using them first.
That said, if you're a code maximalist, if you get the most enjoyment out of hand-crafting code, or if you don't feel comfortable communicating effectively or delegating work, then maybe these tools aren't for you.
At this point in time I'm also pretty much over proselytizing to anyone; I get a TON of value out of this stuff, but everybody has to find their own workflows.
That's why developers are poorly paid and viewed as disposable cogs. It feels "easy" to many people, so they internally think it is immoral to get paid, and corporations ruthlessly prey on that feeling. Reality is that development is hard and requires immense mental work, constant upskilling, and it is not something you can switch off after 5pm and think of something else. Your brain is constantly at work. That work also creates the millions and billions that get extracted from developers and flow to the greedy hands of the rich, who now control all the means of production (think of cloud services, regulation - try starting your own forum today - anything with user-generated content, etc.).
Developers did themselves dirty.
My usage of Claude Code sounds different from a lot of the ways I hear people using it. I write medium-detail task breakdowns with strict requirements and then get Claude to refine them. I review every single line of code methodically, attempting to catch every instance where Claude missed a custom rule. I frequently need to fix up broken implementations, or propose complete refactors.
It is a totally different experience to any other dev experience I have had previously. Despite the effort that I have to put in, it gets the job done. A lot of the code written is better than equivalent code from junior engineers I have worked with in the past (or at worst, no worse).
Claude Code (AI coding agents/assistants) are perhaps the best thing to happen to my programming career. Up until this point, the constraint going from vision to reality has always been the tedious process of typing out code and unit tests or spending time tweaking the structure/algorithm of some unimportant subset of the system. At a high level, it's the mental labor of making thousands of small (but necessary) decisions.
Now, I work alongside Claude to fast track the manifestation of my vision. It completely automates away the small exhaustive decision making (what should I name this variable, where should I put this function, I can refactor this function in a better way, etc). Further, sometimes it comes up with ideas that are even better than what I had in my head initially, resulting in a higher quality output than I could have achieved on my own. It has an amazing breadth of knowledge about programming, it is always available, and it never gives up.
With AI in general, I have questions around the social implications of such a system. But, without a doubt, it's delivering extreme value to the world of software, and will only continue the acceleration of demand for new software.
The cost of software will also go down, even though net more opportunities will be uncovered. I'm excited to see software revolutionize the underrepresented fields, such as schools, trades, government, finance, etc. We don't need another delivery app, despite how lucrative they can be.
Not to impede your overall point, but have you not encountered a situation where Claude gives up? I definitely have, it'll say something like "Given X, Y and Z, your options are [a bunch of things that do not literally but might as well amount to 'go outside and touch grass']."
do
  Write code
  Run code
  Read error
  Attempt to fix error
  Run code
  Read error
  Search Google for error
  Attempt to fix error
  Run code
  Read error
while error == true
---
Claude does all of this for me now, allowing me to concentrate on the end goal, not the minutiae. It hasn't at all changed my workflow; it just does all of the horribly mundane parts of it for me.
I like it and I recommend it to those who are willing to admit that their jobs aren't all sunshine and roses until the product is shipped and we can sit back and get to work on the next nightmare.
This will keep you out of the bleeding edge feature/product space because you lack a honed skill in actually developing the app. Your skill is now to talk to an LLM and fix nightmare code, not work on new stuff that needs expertise.
Just food for thought.
I found it great to write bash scripts, automation, ffmpeg command lines, OCR, refactoring… it’s a great autocomplete.
Working in a large team I realized that even relying too much on other people’s work is making me understand the technology less, and I need to catch up.
If you prompt it correctly and rigidly, and review everything it does large or small, it's a 10x tool for grinding through some very hard problems very quickly.
But it can very easily lead you into overconfidence and bloat. And it can "cheat" without you even realizing it.
Claude Code is best used as an augmentation, not automation tool. And it's best that the tool makers and investors realize that and stop pretending they're going to replace programmers with things like this.
They only work well when combined with a skilled programmer who can properly direct and sift through the results. What they do allow is the ability to focus on higher level thinking and problems and let the tool help you get there.
You still have to really guide the AI, none of this is automatic. Yet I no longer feel the mega joys I once felt hand building something and watching it work correctly. The thrill is gone! Don't know if this is good or bad yet. I don't miss the plumbing bullshit. I do miss the joy.
I compared vibe coding to gambling in one of my recent blog posts and thought that metaphor was slightly uncharitable, but I didn't expect "slot machine" to actually now be the term of art.
The feedback loop of "maybe the next time it'll be right" turned into a few hundred queries resulting in finding the LLM's attempts were a ~20 node cycle of things it tried and didn't work, and now you're out a couple dollars and hours of engineering time.
So slow, untested, and likely buggy, especially as the inputs become less well-conditioned?
If this was a jr dev writing code I’d ask why they didn’t use <insert language-relevant LAPACK equivalent>.
Neither llm outcome seems very ideal to me, tbh.
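To make the gap concrete, a sketch of the two outcomes (naive_solve is the kind of thing an LLM will happily hand-roll; scipy.linalg.solve is the LAPACK-backed equivalent; the sizes here are arbitrary):

    import numpy as np
    from scipy.linalg import solve   # LAPACK (gesv) under the hood

    def naive_solve(A, b):
        # Hand-rolled Gaussian elimination: no pivoting, so it degrades
        # badly on ill-conditioned input.
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for i in range(n):                     # forward elimination
            for j in range(i + 1, n):
                factor = A[j, i] / A[i, i]     # blows up when A[i, i] is ~0
                A[j, i:] -= factor * A[i, i:]
                b[j] -= factor * b[i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):         # back substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.random.rand(500, 500)
    b = np.random.rand(500)
    x_fast = solve(A, b)        # the "LAPACK equivalent": pivoted, tuned, tested
    x_slow = naive_solve(A, b)  # same O(n^3), far slower and less robust in practice

Both "work" on a toy input, which is exactly why the naive one slips through review.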
It's why we get so addicted to gambling. We're built for it because a lot of legitimate real things look like gambling.
With vibe coding, I suspect this is a group that has adapted to it really well (alongside hobby coders and non-coders). The thrill comes from problem solving, and when you can try out the solution quickly and validate it, it is just addictive. The other side is how open source frameworks have increased, and there are a lot of OSS libraries for just about everything. (A personal experience: implementing a Cmd bar (like Linear's) in React when I was just learning. It took me a week or so, and then I just swapped in an OSS implementation for comparison. It was super smooth, but I did not know its assumptions. In production I will prefer that, and I don't always have time to learn and implement from scratch.) We see this with LangChain etc. in LLMs too, and other agentic frameworks as well. The shift is not towards less code but towards getting the thing to work faster. Claude Code accelerates that exponentially as well.
Nothing more satisfying to me than thinking about nifty algorithms, how to wring out every last drop of performance from a system, or more recently train AI models or build agentic systems. From object models to back end, graphics to communications protocols, I love it all.
But that said, I am getting on a bit now and don't particularly enjoy all the typing. When AI rolled around back in 2022 I threw myself into seeing how far I could push it. Copy pasting code back and forth between the chat window and the editor. It was a very interesting experience that felt fresh, even if the results were not amazing.
Now I am a hundred percent using Claude Code with a mixture of other models in the mix. It's amazing.
Yesterday I worked with CC on a CLAP plugin for Bitwig written in C and C++. It did really well - with buffer management, worker threads and proper lock-free data structures and synchronization. It even hand rolled its own WebSocket client! I was totally blown away.
Sure, it needs some encouragement and help here and there, and having a lot of experience for this kind of stuff is important right now for success, but I can definitely see it won't be that way for much longer.
I'm just so happy I can finally get round to all these projects I have had on the back burner all these years.
The productivity is incredible, and things mostly just work. It really brings me a lot of joy and makes me happy.
The interesting thing is going to be how this all holds up in a year or five. I suspect it's going to be a lot of rewriting the same old shit with LLMs again and again because the software they've already made is unmaintainable.
I didn't get into software to labour like a factory worker day in, day out. I like to solve problems and see them deliver value over years and even decades, like an engineer.
I've seen little organic praise from high profile programmers. There aren't many, and once you search they all at least work on a commercial "AI" application.
You could argue, he makes money off of AI with his newsletter and whatnot, so he does stand to gain something, but it's a lot less than the executives and investors who've filled the news.
[1] https://simonwillison.net/ [2] https://en.wikipedia.org/wiki/Django_(web_framework)
I completely agree when using synchronous tools like Windsurf and Cursor. This is why I much prefer the async workflow most of the time. Here you get a chance to think about how the AI should be constrained in order to have the highest probability of a "one shot" PR, or at least something that would only require a few edits. I spend a lot of time on the AGENTS.md file, as well as thinking a lot about the prompt that I am going to use. I sometimes converse with ChatGPT a little on the prompt, as the feedback loop is very fast. Then, just send it and wait ~5 minutes for completion.
I still use synchronous tools sometimes for UI and other things where it is hard to specify up front and iteration is required. But realistically, I use async 50:1 over sync.
You cannot write everything using LLMs, you cannot maintain hodgepodge LLM codebases, but also you might want a break from writing scaffolding code or simple functions.
itsthecourier•2h ago
I have delivered many pet projects I always wanted to build and now focus on operating them.
no moment in my coding life has been this productive and enlightening. pretty often I sit down to read how the ai solved the issue and learn new techniques and tools.
studying a paper, discussing it with Opus, back and forth, taking notes, making Opus check my notes. it has improved my studying sessions a lot too.
I respect the different experiences each of us get from this. for fairness I share mine
elcapitan•2h ago
I have very similar experiences, but very often I wonder if I couldn't just have gone to an open source project on the same topic (e.g. building a toy parser) and read the source code there, instead of letting a code generator AI reproduce it for me. Because that happy feeling is very similar to early moments in my developer career, when I read through open source standard libraries and learned a lot from it.
TheCleric•2h ago
We probably have different personalities but the former is the only part I care about. It’s the operation that bores me.
RobinL•2h ago
I'm probably capable of building all of them by hand, but with a 6yo I'd have never had the time. He loves the games, his mental arithmetic has come on amazingly now he does it 'for fun'.
All code is here: https://github.com/rupertlinacre
Much of this was built out of frustration that most maths resources online are trying to sell you something, are full of ads, or are poor quality. Even a simple zoomable number line is hard to find.
ChadNauseam•2h ago
[^1]: to be clear, nothing in the frontend is copyrighted. I use some copyrighted works to figure out how common various words are, which I need because I wanted the app to teach the most common words first.
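The frequency counting itself is the easy bit; something along these lines, sketched in Python with a made-up corpus path and a crude tokenizer:

    import re
    from collections import Counter
    from pathlib import Path

    def word_frequencies(corpus_dir):
        # Count word occurrences across a folder of plain-text files.
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            counts.update(re.findall(r"[a-zà-ÿ']+", text))
        return counts

    # Teach the most common words first.
    lesson_order = [w for w, _ in word_frequencies("corpus/").most_common(2000)]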
Edit: the site uses the FileSystemWritableFileStream API, so Safari/iOS users will need Safari 26.
smokel•2h ago
To avoid heated discussions, allow me to illustrate the concept with why enterprise software is mainly built with Java, whereas most blog posts are about writing backends with TypeScript, Python, or Rust. The reason for this is at least twofold:
1. Professional programmers don't get paid to write blog posts, and typically want to spend their free time doing other things. Hobbyists do have the time, but they typically do not see the added value of a boring language such as Java, because they don't work in teams.
2. When something is already known, it is boring for people to read about, and therefore less interesting to write about. When something is new, many people write about it, but it's hard to tell who is right and who is wrong.
Given that good writing, and the additional marketing to find the respective audience, take energy, it is not strange that we find weirdly biased opinions when reading blog posts and forums on the internet. I typically find it better to discuss matters with people I know and trust, or to experiment a bit by myself.
The same might happen now with reporting on AI assisted coding.
Edit: might as well just have said "visibility bias" and "novelty bias" if I had consulted an LLM before commenting.
misja111•2h ago
3. Python has a very fast feedback loop. You type, run and see the result immediately. This is great for small projects and prototypes. Java needs much more work, not just for the compiling but it also needs more boilerplate. This is fine for enterprise projects which typically are very large and prefer stability over speed of development. But at home I don't have the time or the patience for that.
cess11•1h ago
Pharo is an alternative but documentation and libraries just aren't as good, and maybe never will be.
lubujackson•2h ago
But I don't think the code matters as much as the intention. The comment is all about exploration and learning. If you treat your LLM like a Wikipedia dive, you will come out the other end with newfound knowledge. If you only want it to achieve a discrete goal, it may or may not do so, but you will have no way to know which.
micahscopes•2h ago
Tabu search guided graph layout:
https://bsky.app/profile/micahscopes.bsky.social/post/3luh4s...
https://bsky.app/profile/micahscopes.bsky.social/post/3luh4d...
Fast Gaussian blue noise with wgpu:
https://bsky.app/profile/micahscopes.bsky.social/post/3ls3bz...
In both these examples, I leaned on Claude to set up the boilerplate, the GUI, etc., which gave me more mental budget for playing with the challenging aspects of the problem. For example, the tabu graph layout is inspired by several papers, but I was able to iterate really quickly with Claude on new ideas from my own creative imagination with the problem. A few of them actually turned out really well.
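If anyone's curious what "tabu search guided" means in practice, the skeleton is roughly this: a simplified Python sketch with a toy stress objective, not the actual code behind those posts:

    import math
    import random

    def stress(pos, edges, ideal=1.0):
        # Toy objective: penalize edge lengths that deviate from an ideal length.
        return sum((math.dist(pos[a], pos[b]) - ideal) ** 2 for a, b in edges)

    def tabu_layout(nodes, edges, iters=2000, tabu_len=None, step=0.1):
        nodes = list(nodes)
        pos = {n: (random.random(), random.random()) for n in nodes}
        best, best_cost = dict(pos), stress(pos, edges)
        tabu_len = tabu_len or max(1, len(nodes) // 4)
        tabu = []                              # recently moved nodes are off-limits
        for _ in range(iters):
            candidates = []
            for n in nodes:
                if n in tabu:
                    continue
                x, y = pos[n]
                new = (x + random.uniform(-step, step),
                       y + random.uniform(-step, step))
                trial = dict(pos)
                trial[n] = new
                candidates.append((stress(trial, edges), n, new))
            if not candidates:                 # everything tabu (tiny graphs)
                tabu.pop(0)
                continue
            cost, n, new = min(candidates, key=lambda c: c[0])  # best non-tabu move, even if worse
            pos[n] = new
            tabu.append(n)
            if len(tabu) > tabu_len:
                tabu.pop(0)
            if cost < best_cost:               # remember the best layout ever seen
                best, best_cost = dict(pos), cost
        return best

    # e.g. tabu_layout(range(20), [(i, (i + 1) % 20) for i in range(20)])

The tabu list is what keeps it from undoing the same move over and over; the real thing presumably uses a better objective and neighborhood, but the control structure is the same idea.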
Sometimes I'll admit that I do treat Claude like a slot machine, just shooting for luck. But in the end that's more trouble than it's worth.
The most fruitful approach is to maintain a solid understanding of what's happening and guide it the whole way. Ask it to prove that it's doing what it says it's doing by writing tests and using debug statements. Channel it toward double checking its own work. Challenge it.
Another thing that worked really well the other day was to use Claude to rewrite some old JavaScript libraries I hand wrote a few years ago in rust. Those kinds of things aren't slot machine problems. Claude code nails that kind of thing consistently.
Ah, one more huge success with code: https://github.com/micahscopes/radix_immutable
I took an existing MIT-licensed prefix tree crate and had Claude+Gemini rewrite it to support immutable, quickly comparable views. In about one day's work. I scoured the prefix tree libraries available in Rust, as well as the various existing immutable collections libraries, and found that nothing like this existed. This implementation has decently comprehensive tests and benchmarks.
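I won't claim this matches the crate's internals, but the general trick behind "immutable, quickly comparable views" can be as simple as computing a structural hash once at construction; a toy Python sketch:

    class TrieView:
        # Toy immutable prefix-tree node. Nodes never mutate, so a structural
        # hash computed once at construction makes unequal views cheap to tell apart.
        __slots__ = ("children", "is_key", "_hash")

        def __init__(self, children=(), is_key=False):
            self.children = dict(children)                 # char -> TrieView
            self.is_key = is_key
            self._hash = hash((is_key, frozenset(
                (c, kid._hash) for c, kid in self.children.items())))

        def insert(self, word):
            # Return a new view with `word` added; the original is untouched.
            if not word:
                return TrieView(self.children, True)
            head, rest = word[0], word[1:]
            kids = dict(self.children)
            kids[head] = kids.get(head, TrieView()).insert(rest)
            return TrieView(kids, self.is_key)

        def __hash__(self):
            return self._hash

        def __eq__(self, other):
            if not isinstance(other, TrieView) or self._hash != other._hash:
                return False                               # fast path: hashes differ
            return self.is_key == other.is_key and self.children == other.children

    a = TrieView().insert("cat").insert("car")
    b = TrieView().insert("car").insert("cat")
    assert a == b        # equal structure regardless of insertion order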
One more I'll share, an embarrassing failure: https://github.com/micahscopes/splice-weaver-mcp
I used vibe kanban like a slot machine and ended up with a messy MCP server that doesn't really do anything useful that I can tell. Mostly because I didn't have a clear vision when I went into it.
cmrdporcupine•1h ago
I'll now go through this, remove the excessive comments and flowery language, add more tests, put it through its paces. But it did me a service by getting the pieces in place to start.
And I'm even above the 250 karma threshold!