In your model, you give enough guidance to generally know what the AI is doing, and the AI is finishing what you started.
It's supposed to be consumed by LLMs to help prepare them to provide better examples - maybe a newer version of a library than is in the model's training data for example.
I've often thought that rather than an MCP server for this that my LLM agent can query, maybe I just want to query this high-signal-to-noise resource myself rather than trawl the documentation.
What additional value does an LLM provide when a good documentation resource exists?
I've also used it in the past to look up the Windows API, since I haven't coded for Windows in decades. (For the equivalent of pipe, fork, exec.) The generated code had a resource leak, which I recognized, but it was enough to get me going. I suspect Stack Overflow also had the answer to that one.
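For context, the Win32 shape of that pattern is CreatePipe + CreateProcess rather than pipe/fork/exec. A minimal sketch from memory (error handling omitted); the easy leak in this pattern is forgetting the CloseHandle calls:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE readEnd, writeEnd;
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE }; /* inheritable handles */

    CreatePipe(&readEnd, &writeEnd, &sa, 0);
    /* Keep the read end out of the child, or we never see EOF. */
    SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags    = STARTF_USESTDHANDLES;
    si.hStdOutput = writeEnd;
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi;
    char cmd[] = "cmd.exe /c dir";
    CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);

    /* The part that's easy to miss: close the parent's copy of the
       write end (otherwise ReadFile below never returns EOF) and the
       process/thread handles, or they leak. */
    CloseHandle(writeEnd);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    char buf[4096];
    DWORD n;
    while (ReadFile(readEnd, buf, sizeof buf, &n, NULL) && n > 0)
        fwrite(buf, 1, n, stdout);
    CloseHandle(readEnd);
    return 0;
}
```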
And for fun, I've had copilot generate a monad implementation for a parser type in my own made-up language (similar to Idris/Agda), and it got fairly close.
It's like car navigation or Google Maps: annoying and not very useful in your hometown, very helpful when traveling or in unfamiliar territory.
At first it was really cool getting an understanding of what it can do. It can be really powerful, especially for things like refactoring.
Then, I found it to be in the way. First, I had to rebind the auto-insert from TAB to ctrl+space (example config below) because I would try tabbing code over and blamo: lines inserted, resulting in more work deleting them.
Second, I found that I'd spend more time reading the ai generated autocomplete that pops up. It would pop up, I'd shift focus to read what it generated, decide if it's what I want, then try to remember what the hell I was typing.
So I turned it all off. I still have access to context aware chats, but not the autocomplete thing.
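For anyone hitting the same tab problem: the rebind looked roughly like this in VS Code's keybindings.json (assuming Copilot-style inline suggestions; the command IDs may differ for other editors or assistants):

```jsonc
// keybindings.json: stop Tab from accepting inline suggestions,
// accept them with ctrl+space instead
[
  { "key": "tab",        "command": "-editor.action.inlineSuggest.commit" },
  { "key": "ctrl+space", "command": "editor.action.inlineSuggest.commit",
    "when": "inlineSuggestionVisible" }
]
```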
I have found that I'm remembering more and understanding the code more (shocking). I also find that I'm engaging with the code more, making more of an effort to actually understand it.
Maybe some people have better memory, attention span, or ability to context-switch than me. Maybe younger people, more used to distractions and attention-stealing content, cope better.
It feels like what I experienced with adaptive cruise control.
Instead of watching my speed, I was watching traffic flow, watching cars way up ahead instead.
The syntax part of my brain is turned off, but the "data flow" part is 100% on when reading the code instead.
As a result I've never found adaptive cruise control (or self-driving) to be all that big a deal for me. But hearing your perspective suddenly makes me realize why it is so compelling for so many others.
If the design speed of your roads is a safe speed for those around you then yeah that works perfectly.
The more gentle autoformatters actually do their job correctly, but the more aggressive ones make code harder to read. And BTW, I hate golang with a passion. It's a language designed to get fifty thousand bootcamp grads from developing countries to somehow write coherent code. I just don't identify with that, although I do understand why it needs to exist.
Since we've had Claude Code for a few months I think our opinions have shifted in the opposite direction. I believe my preference for autocomplete was driven by the weaknesses of Chat/Agent Mode + Claude Sonnet 3.5 at the time, rather than the strengths of autocomplete itself.
At this point, I write the code myself without any autocomplete. When I want the help, Claude Code is open in a terminal to lend a hand. As you mentioned, autocomplete has this weird effect where instead of considering the code, you're sort of subconsciously trying to figure out what the LLM is trying to tell you with its suggestions, which is usually a waste of time.
On the other hand, I love Cursor's autocomplete implementation. It doesn't just provide suggestions for the current cursor location; it also suggests where the cursor should jump next within the file. You change a function name and just press tab a couple of times to change the name in the docstring and everywhere else. Granted, refactoring tools have done that forever for function names, but now it works for everything. And if you do something repetitive, it picks up on what you are doing and turns it into a couple of quick keypresses.
It's still annoying sometimes
I agree autocomplete kinda gets in the way, but don’t confuse that with all AI coding being bad; they’re two totally distinct functions.
The only time it helps is when I have several similar lines and, after I make a change to the first line, it offers to change all the rest. It's almost always correct, but sometimes it is subtly not, and then I waste 5 minutes trying to figure out why it didn't work, only to notice the subtle bug it introduced. I'm not sure how anyone thinks this is somehow better than just knowing what you're doing and doing it yourself.
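A made-up example of the kind of subtle miss I mean:

```c
typedef struct { double x, y, z; } Vec3;

Vec3 transform(Vec3 in, double scale, Vec3 offset) {
    Vec3 out;
    /* I typed the first line; autocomplete offered the other two. */
    out.x = scale * in.x + offset.x;
    out.y = scale * in.y + offset.y;
    out.z = scale * in.z + offset.y;  /* .y instead of .z: compiles fine, fails quietly */
    return out;
}
```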
What you can do is create a hotkey to toggle autocomplete on and off.
If you keep on refining the prompts, you are just eating the hype that is designed to be sold to the C-suite.
AI is just another tool; use it or turn it off. It shouldn't matter much to a developer.
Today my teammate laughed off generating UI components to quickly solve a ticket, knowing full well no one will review it now that it's LLM-generated, and that it will probably make our application slower once the unreviewed code gets merged. The consensus is that anything they make worse they can push off onto me to fix, because I'm the expert on our small team. I have been extremely vocal about this. However, it is more important to push stuff through for release and make my life miserable than to make sure the code is right.
Today I now refuse to fix any more problems on this team and might quit tomorrow. This person tells me weekly that they always want to spend more time writing and learning good code, and then always gets upset when I block a PR merge.
Today I realized I might hate my current job now. I think all LLMs have done is enable my team to collect a paycheck and embrace disinterest.
Don't quit. Get fired instead (strictly without cause). In this way you can at least collect some severance and also unemployment. You will also absolve yourself of any regrets for having quit. Actually, just keep doing what you're doing, and you will get fired soon enough.
The other thing you can try is to ask for everyone to have their own project that they own, with the assigned owner fully responsible for it, so you can stop reviewing the work of other people.
If you're not in step with where you're at, and you can find other employment where you'll be happier, why not change?
You could apply your same logic to, "If you're in a relationship with a significant other, don't break up with them... get them to break up with you! You will absolve yourself of any regrets of dumping them." Yes, and you will have wasted both your time, and their time.
And the same goes for working at a company that you feel isn't good for you.
If it's a new problem, you need to write the code so that you discover all the peculiar corner cases and understand them.
If it's the (N+M)th time, and you've been using AI to write the code for the last M times, you may find you no longer understand the problem.
Fair warning. Write the damn code.
Spend more time on interfaces and test suites. Let the AI toil away making the implementation work according to your spec. Not implementing the interface is a wrong answer, not passing the tests is a wrong answer.
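Concretely, the split might look like this (a minimal C sketch with made-up names): I own the header and the tests, the agent owns the implementation file, and "doesn't implement the interface" and "doesn't pass the tests" are both unambiguous wrong answers:

```c
/* ratelimit.h: the part I write myself, the contract (names made up). */
#include <stdbool.h>

typedef struct RateLimiter RateLimiter;

RateLimiter *rl_new(int max_per_window, int window_ms);
bool rl_allow(RateLimiter *rl, long now_ms);   /* may this call proceed? */
void rl_free(RateLimiter *rl);

/* ratelimit_test.c: also mine; the AI's ratelimit.c has to pass it. */
#include <assert.h>

int main(void) {
    RateLimiter *rl = rl_new(2, 1000);
    assert(rl_allow(rl, 0));
    assert(rl_allow(rl, 10));
    assert(!rl_allow(rl, 20));    /* third call inside the window: refused */
    assert(rl_allow(rl, 1500));   /* window rolled over */
    rl_free(rl);
    return 0;
}
```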
If you've worked in software long enough, you will have encountered people who are uninterested in learning or uncoachable for whatever reason. That describes all of the LLMs too. If the LLM doesn't get it, don't waste your time; it will probably never get it. You need to try a different model or get another human involved, same as you would for an incompetent and uncoachable human.
As an aside: my advice to junior engineers is to show off your wetware, demonstrate learning and adaptation at runtime. The models can't do that yet.
When switching context in any way, I start a new prompt.
"From the perspective of Senior / Staff level engineer, what is good about this code"
Does it praise it?
Taking a step back and reviewing all my changes gives a different perspective, and I often find things I didn’t see when in the weeds.
> Write the initial version yourself and ask AI to review and improve it.
> Write the critical parts and ask AI to do the rest.
> Write an outline of the code and ask AI to fill the missing parts.
So well put. I'm writing these on a post-it note and putting it above my monitor. I held off on using agents to generate code for a long time and was finally forced to really make use of them, and this is so in line with my experience.
My biggest surprises have been how much the model doesn't seem to matter (?) when I'm making the prompts appropriately narrow. Also surprised at how hard it is to pair program in something like cursor. If your prompting is even slightly off it seems like it can go from 10xing a build process to making it a complete waste of time with nothing to show but spaghetti code at the end.
Anyway long live the revolution, glad this was so technically on point and not just a no-ai rant (love those too tho).
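For instance, the outline approach from the quoted list might look like this (made-up example): I fix the structure and the decisions, and each TODO becomes one narrow prompt:

```c
#include <stdio.h>

typedef struct { double min, max, mean; long count; } Stats;

/* Value of the 0-based column `col` in one CSV line. */
static double parse_column(const char *line, int col) {
    (void)line; (void)col;
    return 0.0;  /* TODO(ai): real parsing; handle quoted fields */
}

/* One pass over the file, constant memory, running mean. */
static Stats compute_stats(FILE *f, int col) {
    Stats s = {0};
    (void)f; (void)col;
    return s;    /* TODO(ai): implement on top of parse_column; skip malformed lines */
}

int main(int argc, char **argv) {
    (void)argc; (void)argv;
    return 0;    /* TODO(ai): arg parsing (FILE COLUMN) and output formatting */
}
```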
I think better advice would be to learn to read and review an inordinate amount of code, very fast. Also a heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...
Kinda the opposite advice from the blog. :-)
Edit: Somebody pointed out that, in order to read/review code, you have to write it. Very true. It raises the question of how you acquire/extend your skills in the age of AI coding assistance. Not sure I have an answer. Claude Code now has /output-style: Learning, which forces you to write part of the code. That's a good start.
Sure thing. We've been '6 months away' from AI taking our jobs for years now.
Also, it looks like OpenAI and Anthropic have completed their fundraising cycles. So AGI "has been cancelled" for now. :-)
I'm not saying that it definitely isn't going to happen, but there is a loooong way to go for non-FAANG medium and small companies to let their livelihoods ride on AI completely.
>I think better advice would be to learn to read and review an inordinate amount of code, very fast. Also a heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...
If we get to a point in 1-2 years where AI is vibe-coding at a high, mostly error-free level, what makes you think it couldn't review code as well?
AI-assistance is a multiplier, not an addition. If you have zero understanding before AI, you will get zero capabilities with AI.
We over-analyze, over-discuss, over-plan, and over-optimize before we even write the first import or include.
Some of my best ideas came to me as I was busy programming away at my vision. There's almost a zen-like state there.
the end
My experience is that treating the generated code as a Merge Request, on which you submit comments for correction (and then again for the next round), works fairly well.
Because the AI is bad, you get more rounds than in a real code review, but because the AI is fast and at your command, each round is way faster than a code review with a human (< 10-minute feedback loop).
This is terrible advice.
Why would I go through the write, run, debug loop myself when I can just have cc do it?
This has helped to explain why, at least for me, LLMs have been more useful for reading code than writing code. I’m also just reluctant to submit the code it’s written on my behalf without making hundreds of small adjustments, but I think I’ll need to get over that, as I wouldn’t be so nit-picky if a junior engineer were completing the task.
If the context window is full, it's better to save your progress by reformulating the problem and prompting the AI with the progress you've made so far: a fresh start with an empty context.
In my experience any coding AI starts with really good code and it goes down from there.
https://en.m.wikipedia.org/wiki/Decomposition_(computer_scie...
3 citations... I studied CS in college, got my degree, and have worked in software for the last 8 years, and I have never once heard "decomposition" used in any way, in college or out, until this very moment.
Apparently what the article argues against is using it like a software factory: give it a prompt for what you want, and when it gets it wrong, iterate on the prompt.
I understand why this can be a waste of time: if programming is a specification problem [1], just shifting from programming language to natural language doesn’t solve it.
1. https://pages.cs.wisc.edu/~remzi/Naur.pdf
So yes, you have to specify things, but there's a lot more implicit understanding and knowledge that can be retrieved, relevant to the task you're doing, than a regular language would have.
Can you show it to us?