In your model, you give enough guidance to generally know what the AI is doing, and the AI is finishing what you started.
It's meant to be consumed by LLMs to help them provide better examples, e.g. for a newer version of a library than the one in the model's training data.
I've often thought that, rather than exposing this as an MCP server my LLM agent can query, maybe I just want to query this high-signal-to-noise resource myself instead of trawling the documentation.
What additional value does an LLM provide when a good documentation resource exists?
I've also used it in the past to look up the Windows API, since I haven't coded for Windows in decades. (For the equivalent of pipe, fork, exec.) The generated code had a resource leak, which I recognized, but it was enough to get me going. I suspect Stack Overflow also had the answer to that one.
And for fun, I've had copilot generate a monad implementation for a parser type in my own made-up language (similar to Idris/Agda), and it got fairly close.
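For reference, the kind of parser monad being described looks roughly like this sketch in Haskell (standing in for the commenter's made-up Idris/Agda-like language, which isn't available; all names here are illustrative):

```haskell
-- A parser is a function from input to a possible (result, rest) pair.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> case p s of
    Just (a, rest) -> Just (f a, rest)
    Nothing        -> Nothing

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> case pf s of
    Just (f, rest) -> case pa rest of
      Just (a, rest') -> Just (f a, rest')
      Nothing         -> Nothing
    Nothing -> Nothing

instance Monad Parser where
  -- Sequencing: run the first parser, feed its leftover input
  -- to the parser chosen by the second step.
  Parser p >>= f = Parser $ \s -> case p s of
    Just (a, rest) -> runParser (f a) rest
    Nothing        -> Nothing

-- Consume one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s -> case s of
  (c:cs) | ok c -> Just (c, cs)
  _             -> Nothing

char :: Char -> Parser Char
char c = satisfy (== c)

-- do-notation works once the Monad instance exists.
pair :: Parser (Char, Char)
pair = do
  a <- char 'a'
  b <- char 'b'
  return (a, b)
```

With this, `runParser pair "abc"` yields `Just (('a','b'), "c")`, and `runParser pair "xy"` yields `Nothing`; "fairly close" for an LLM here usually means getting the instances right but fumbling an edge case like the leftover-input threading.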
At first it was really cool getting an understanding of what it can do. It can be really powerful, especially for things like refactoring.
Then, I found it to be in the way. First, I had to rebind the auto-insert from Tab to Ctrl+Space, because I would try tabbing code over and blamo: lines inserted, leaving me more work deleting them.
Second, I found that I'd spend more time reading the AI-generated autocomplete that pops up. It would appear, I'd shift focus to read what it generated, decide if it's what I want, then try to remember what the hell I was typing.
So I turned it all off. I still have access to context aware chats, but not the autocomplete thing.
I have found that I'm remembering more and understanding the code more (shocking). I also find that I'm engaging with the code more, making more of an effort to actually understand it.
Maybe some people have better memory, attention span, or ability to context-switch than me. Maybe younger people, more used to distractions and attention-stealing content, handle it better.
It's similar to how I felt with adaptive cruise control.
Instead of watching my speed, I was watching traffic flow, watching cars way up ahead instead.
The syntax part of my brain is turned off, but the "data flow" part is 100% on when reading the code instead.
Since we've had Claude Code for a few months, I think our opinions have shifted in the opposite direction. I believe my preference for autocomplete was driven by the weaknesses of Chat/Agent Mode + Claude Sonnet 3.5 at the time, rather than the strengths of autocomplete itself.
At this point, I write the code myself without any autocomplete. When I want the help, Claude Code is open in a terminal to lend a hand. As you mentioned, autocomplete has this weird effect where instead of considering the code, you're sort of subconsciously trying to figure out what the LLM is trying to tell you with its suggestions, which is usually a waste of time.
I agree autocomplete kinda gets in the way, but don't confuse that with all AI coding being bad; they're two totally distinct functions.
If you keep on refining the prompts, you are just eating the hype that is designed to be sold to the C-suite.
AI is just another tool: use it or turn it off. It shouldn't matter much to a developer.
Today my teammate laughed off generating UI components to quickly close a ticket, knowing full well that no one will review it now that it's LLM-generated, and that it will probably make our application slower because the unreviewed code gets merged. The consensus is that anything they make worse, they can push off onto me to fix, because I'm the expert on our small team. I have been extremely vocal about this. But apparently it is more important to push stuff through for release and make my life miserable than to make sure the code is right.
I now refuse to fix any more problems on this team and might quit tomorrow. This person tells me weekly that they want to spend more time writing and learning good code, then always gets upset when I block a PR merge.
Today I realized I might hate my current job now. I think all LLMs have done is enable my team to collect a paycheck and embrace disinterest.
Don't quit. Get fired instead. In this way you can at least collect severance and also unemployment. You will also absolve yourself of any regrets for having quit. Actually, just keep doing what you're doing, and you will get fired soon enough.
manoDev•56m ago
Apparently what the article argues against is using it like a software factory: give it a prompt of what you want and, when it gets it wrong, iterate on the prompt.
I understand why this can be a waste of time: if programming is a specification problem [1], merely shifting from a programming language to natural language doesn't solve it.
1. https://pages.cs.wisc.edu/~remzi/Naur.pdf
lukevp•7m ago
So yes, you have to specify things, but there's a lot more implicit understanding and knowledge that can be retrieved relevant to the task you're doing than a formal language alone would carry.