Thanks. The author touched on something there, close to a truth (or maybe a deep belief I hold?) about our lives, something about the journey mattering more than the destination...
In my opinion, having spent about a year and a half working on various coding projects using AI, there are phases to the AI coding lifecycle.
1) Coding projects start out like infants: you need to write a lot of code by hand at first to set the right template and patterns you want the AI to follow going forward.
2) Coding projects continue to develop kind of like garden beds: you have to guide the structure and provide the right "nutrients" for the project, so that the AI can continue to add additional features based on what you have supplied to it.
3) Coding projects mature kind of like children growing up to become adults. A well-configured AI agent, starting from a clean, structured code repo, can be mostly autonomous; but just as your adult kid might still phone home to Mom and Dad for advice or help, you, as the "parent" of the project, are still going to be involved when the AI gets stuck and needs help.
Personally, while I can get some joy and satisfaction from manually typing lines of code, most of those lines are things I've typed literally hundreds of times over my decades-long journey as a developer. There isn't much joy in typing out the same things again and again, but there is joy in the longer-term steering and shaping of a project so that it stays sane, clean, and scalable. I get a similar sense of joy out of gently steering AI towards success in my projects that I get from gently steering my own child towards success. There is something incredible about providing the right environment and the right pushes in the right direction, and then seeing something grow and develop mostly on its own (but with your support backing it up).
Guess I will not be a good parent lol.
Not to mention that context length is limited, so if you told it something "earlier" then your statement has probably already dropped off the end of the context window.
What works better is to prompt with positive instructions of intent like:
"Working exclusively in file(s) ____ and ____ implement ____ in a manner similar to how it is done in example file ______".
I start a fresh chat for each prompt, with fresh context, and try to keep all instructions embedded within a single prompt rather than relying on fragile past state that may or may not have dropped off the end of the context window. If there is something like "don't touch these core files" or "work exclusively in folder X" that I want it to always consider, then I add it as a system prompt or global rule file (ensures that the instruction gets included automatically on every prompt).
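To make that concrete, here's a hypothetical sketch of the kind of global rule file I mean (the file name, the loading mechanism, and the paths are all made up for illustration; every tool has its own equivalent):

    # Included automatically with every prompt
    - Work only inside src/features/; never modify src/core/ or db/migrations/.
    - Follow the existing service/repository pattern; src/features/billing/ is the reference implementation.
    - Run the test suite and report any failures before proposing a commit.

Because these rules ride along with every request, they can't silently fall off the end of the context window the way a "don't touch these files" remark from ten prompts ago can.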
And don't get me wrong, I get frustrated with AI sometimes too, but the frustration has declined dramatically as I've learned how to prompt it: appropriate task sizes, positive statements rather than negative ones, gathering the appropriate context to steer behavior and output, etc.
I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt.
My work is pushing these tools hard, and it is taking a huge toll on me. I'm constantly hearing how life-changing this is, but I cannot replicate it no matter what I do.
I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like I'm either extremely unskilled or everyone else is gaslighting me, with basically nothing in between.
I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!
The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!
It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do; I feel like I'm falling behind and being singled out.
I probably need to change employers to get away from AI usage metrics at this point, but it feels like everyone everywhere is guzzling the AI hype. It feels hopeless.
I've been pretty deeply into LLMs myself since 2023; I've built several small models from scratch and SFT-trained many more, so it's not like I'm ignorant of how they work. I'm just not getting the workflow results.
If you're not doing tasks that are statistically common in the training data, however, you're not going to have a great experience. That being said, very little in software is "novel" anymore, so you might be surprised.
We used to caution people not to copy and paste from StackOverflow without understanding the code snippets; now we have people generating "vibe code" from nothing using AI, never reading it once, and pushing it to master?
It feels like an insane fever dream
I'm guessing you don't care about quality very much, since you are focusing on your output volume.
I couldn't even get Zed hooked up to GitHub Copilot. I use ChatGPT for snippets and search, and it's okay, but I don't want to bother checking its work on a large scale.
I think I blacked out when my brain tried to process this phrase.
Nothing personal, but I automatically discount all claims like this (something something extraordinary claims require extraordinary evidence, and all that…).
My experience is in one of the areas where people say it is most helpful.
Which really just adds to the gaslighting effect
This could be cope but I don't think it is.
The quality of LLM code is consistently average at best, and usually quite bad imo. People say it is like a junior, but if a junior I hired produced such code consistently and never improved I would be recommending the company PIP them out.
Having output like a junior would be fine if I didn't have to fix it myself. As it stands, I've never been able to get it to produce code of the quality I want, so I end up spending more time fixing it than I would just writing it.
I dunno. It sucks man
I don't think this is it, personally.
For me, if I spent the time testing 3 different models, I would definitely be slower than writing the code myself.
The untrained temp workers using AI to do the entirety of their jobs aren't producing code of professional quality; it doesn't adhere to best practices or security unless you monitor that shit like a hawk. But if you're still engineering for quality, then AI is not the first train you've missed.
They will get code into production quicker and cheaper than you through brute force iteration. Nothing else matters. Best practices went the way of the rest of the social contract the instant feigned competence became cheaper.
Even my podunk employer has AI metrics. You won't escape it. AI will eventually gatekeep all expertise and the future employee becomes just a disposable meat interface (technician) running around doing whatever SHODAN tells them to.
Most of my experience has been similar to yours. But yesterday, out of the blue, it spit out a commit that I accepted almost verbatim (just added some line breaks and such). I was actually really surprised: not only did it follow the existing codebase conventions and variable naming style, it also introduced a couple of patterns I hadn't thought of (and liked).
But it also charged me $2 for the privilege :) (On a related note, Gemini API has become noticeably more expensive compared to, say, a month ago.)
I find that with Aider, managing context (what files you add to it) can make all the difference.
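For what it's worth, here is a rough sketch of the kind of session I mean (the commands are from memory and the file names are invented, so treat this as illustrative rather than exact; check the Aider docs for current syntax):

    /add src/invoices.py tests/test_invoices.py    # only the files being edited
    /read-only src/models.py                       # reference context, not to be changed
    Implement pro-rated refunds in invoices.py, mirroring how discounts are handled, and add tests.
    /drop tests/test_invoices.py                   # trim context once it's no longer needed

Keeping the set of added files small is most of the battle; the more irrelevant files sit in the chat, the more scattered the edits tend to get.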
My impression is that artists are even more hostile than the most AI-skeptic of software engineers. In large part, this is likely because the economic argument doesn't hold much sway. For the large majority of artists it's already hard to make money with art; the bottleneck is not the volume of art they can produce. There's a much clearer path to turning "more code" into "more money", even if it's still not direct.
But to get there it might be a good move to code for yourself (and read books).
Then, on the other hand, coding will not be a fun job anymore...
What a weird alternate universe it is that I live in. My managers are somewhat skeptical of AI workflows and keep throwing up roadblocks to deeper and more coordinated use among my colleagues. Probably because there is so much churn, and it’s difficult to replicate the practice from one engineer to another. Some of my colleagues are very resistant to using AI. I use it quite extensively, but rate limits mean that there are occasions when I must pick up where the machine leaves off.