I am seeing a lot of folks talking about maintaining a good "Agent Loop" for doing larger tasks. It seems like Kilo Code has figured it out completely for me. Using the Orchestrator mode I'm able to accomplish really big and complex tasks without having to design an agent loop or hand-craft context. It switches between modes and accomplishes the tasks. My AGENTS.md file is really minimal, like "write tests for changes and make small commits".
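To illustrate how minimal that can be, a file along those lines might be nothing more than a couple of bullets (a hypothetical sketch based on the comment above, not the commenter's actual file):

```markdown
# AGENTS.md
- Write tests for every change.
- Make small, focused commits.
```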
Instead, I'll ask Cursor to refactor code that I know is inefficient. Abstract repetitive code into functions or includes. Recommend (but not make) changes to larger code blocks or modules to make them better. Occasionally, I'll have it author new functionality.
What I find is that Cursor's autocomplete pairs really well with the agent's context. So, even if I only ask it for suggestions and tell it not to make the change, when I start implementing those changes myself (either some or all), the shared context kicks in and autocomplete starts providing suggestions in the direction of the recommendation.
However, at any time I can change course and Cursor picks up very quickly on my new direction and the autocomplete shifts with me.
It's so powerful when I'm leading it to where I know I want to go, while having enormous amounts of training data at the ready to guide me toward best practices or common patterns.
I don't run any .md files though. I wonder what I'm missing out on.
Gone are the days of exhausting yourself by typing full requests like "refactor this function to use async/await." Now, simply type "refac—" and let our AI predict that you want an AI to refactor your code.
It's AI all the way down, baby.
I'm currently training local LLMs on data derived by small movements of my body, like my eyes and blinking patterns, in order to skip the keyboard altogether and enter a state of pure vibe.
In fact, this entire response was written by an LLM trained on my controlled flatulence in order to respond to HN posts.
The builders are quietly learning the tools, adopting new practices and building stuff. Everyone else is busy criticizing the tech for its shortcomings and imperfections.
It's not a criticism of AI, broadly, it's commentary on a feature designed to make engineers (and increasingly non-engineers) even lazier about one of the main points of leverage in making AI useful.
Because that's where the text the devs type still matters most.
Do I care significantly about this feature's existence, and find it an affront to humanity? No.
But people who find themselves using autocomplete to write even their prompts for them will absolutely be disintermediated, so I think it wise to make sure people understand that by making funny jokes about it.
Caught Claude 4.5 via Cursor yesterday trying to set a password to “password” on an outward-facing EC2 service.
See https://www.jetbrains.com/help/ai-assistant/use-custom-model...
I suppose this is by design so you don't know how much you have left and will need to buy more credits.
I always preferred the deep IDE integration that Cursor offers. I do use AI extensively for coding, but as one tool in the toolbox: it's not always the best in every context, and I often find myself switching between vibe coding and regular coding, with various levels of hand-holding. I also like having access to other AI providers. I have used various Claude models quite a lot, but they are not the be-all and end-all; I often got better results with o3 and now GPT-5 Thinking. Even if they are slower, it's good to be able to switch and test.
I always felt that the UX of tools like Claude Code encourages you to blindly do everything through AI; it's not as seamless to dig in and take more control when it makes sense to do so. That being said, they are very similar now, since they all constantly copy each other. I suppose for many it's just inertia as well: simply which one they tried first and what they are subscribed to. To an extent that is the case for me too.
Do people think there are better autocomplete options available now? Is it a case of just using a particular model for autocomplete in whatever IDE you want to use?
1) The most useful thing about Cursor was always state management of agent edits: being able to roll back to previous states after some edits with the click of a button, reapply changes, preview edits, etc. But weirdly, it seems like they never recognized this differentiator; indeed it remains a bit buggy, and some crucial things (like mass-reapply after a rollback) never got implemented.
2) Adding autocomplete to the prompt box makes me suspect they somehow still do not understand best practices in using AI to write code. It is more crucial than ever to be clear in your mind about what you want to do in a codebase, so that you can recognize when the AI is deviating from that path. Giving the LLM more and earlier opportunities to create deviation is a terrible idea.
3) Claude Code was fine in CLI and has a nearly-identical extension pane now too. For the same price, I seem to get just as much usage, in addition to a Claude subscription.
I think Cursor will lose because models were never their advantage and they do not seem to really be thought leaders on LLM-driven software development.
qsort•57m ago
Again, I haven't used Cursor in a while, I'm mostly posting this hoping for Cunningham's Law to take effect :)
anthonypasq•49m ago
Idk, seems worth it to me. If you're shelling out for one of the $200 plans maybe it's not as worth it, but it just seems like the best all-in-one AI product out there.
jermaustin1•46m ago
I wouldn't even bother with it, but the MCP coding tool I built uses Claude Desktop and is Windows-only, and my laptop is macOS. So I'm using Cursor, and it is WAY WORSE than even the simplest of my MCP servers (which literally just does dotnet commands, filesystem commands, and GitHub commands).
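For context, an MCP server that just wraps a few CLI commands really can be tiny. Here's a rough sketch of the idea using the official Python MCP SDK and a single hypothetical dotnet_build tool (the commenter's actual tool is .NET/Windows-based, so this is only illustrative, not their implementation):

```python
import subprocess
from mcp.server.fastmcp import FastMCP

# Hypothetical server name; exposes one tool that shells out to the dotnet CLI.
mcp = FastMCP("dotnet-tools")

@mcp.tool()
def dotnet_build(project_path: str) -> str:
    """Run `dotnet build` on a project and return the combined build output."""
    result = subprocess.run(
        ["dotnet", "build", project_path],
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for clients like Claude Desktop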
I think having something as general as Cursor causes the editor to try too many things outside what you actually want.
I fought for 2 hours and 45 minutes while Sonnet 4 (which is what my MCP setup uses) kept inventing worse ways to implement OpenAI Responses using the OpenAI-dotnet library. Even switching to GPT-5 didn't help. Adding the documentation didn't help. I went to Claude in my browser, pasted the documentation and the class I wanted extended to use Responses, and it finished it in 5 minutes.
The Cursor "special-sauce" seems to be a hinderance now-days. But beggars can't be choosers, as they say.
jtrn•40m ago
Claude Code is more reliable and generally better at using MCP for tool calls, like pulling docs from Context7. So if I had only one prompt and it HAD to make something work, Claude Code would be my bet.
Personally I like jumping between models and IDEs, if only to mix it up. And you get a reminder of different ways of doing stuff.