After using Cursor intensively for the better part of a year, I'm stunned by how fast it is. It can scaffold entire features, wire up components, and write complex logic in seconds. The feeling is like the difference between driving a car with a manual versus an automatic transmission. Or maybe, more accurately, like the difference between reading detailed documentation versus just watching a summary video.
It's brought me back to when I first started using GitHub Copilot in 2023. Back then, it was mostly for autocompleting methods and providing in-context suggestions. That level of assistance felt just right. For more complex problems, I'd consciously switch contexts and ask a web-based AI like ChatGPT. I was still the one driving.
But tools like Cursor have changed the dynamic entirely. They are so proactive that they're stripping me of the habit of thinking deeply about the business logic. It's not that I've lost the ability to think, but I'm losing the ingrained, subconscious behavior of doing it. I'm no longer forced to hold the entire architecture in my head.
This is leading to a progressively weaker sense of ownership over the project. The workflow becomes:
1. Tell the AI to write a function.
2. Debug and test it.
3. Tell the AI to write the next function that connects to it.
Rinse and repeat. While fast, I end up with a series of black boxes I've prompted into existence. My role shifts from "I know what I'm building" to "I know what I want." There's a subtle but crucial difference. I'm becoming a project manager directing an AI intern, not an engineer crafting a solution.
This is detrimental for both the individual developer and the long-term health of a project. If everyone on the team adopts this workflow, who truly understands the full picture?
Here’s a concrete example that illustrates my point perfectly: writing git commit messages.
Every time I commit, I have a personal rule to review all changed files and write the commit message myself, in my own words. This forces me to synthesize the changes and solidifies my understanding of the project's state at that specific point in time. It keeps my sense of control strong.
If I were to let an AI auto-generate the commit message from the diff, I might save a few minutes. But a month later, looking back, I’d have no real memory or context for that commit. It would just be a technically accurate but soulless log entry.
I worry that by optimizing for short-term speed, we're sacrificing long-term understanding and control.
Is anyone else feeling this tension? How are you balancing the incredible power of these tools with the need to remain the master of your own codebase?
muzani•6mo ago
This is the exact same feeling I got when I was coding things before AI. There's a meme we had at the office where someone runs git blame on the shitty code and realizes they wrote it. We tried to have people do tech talks about the hardest things they had to build, but most people don't even remember it after 6 months.
I think when people are in flow, they act possessed. It's not even muscle memory forgetting. They make the same mistake. They add a line that fixes Bug A but causes Bug B. They then remove the line causing Bug B, and Bug A regresses. Then they'll scratch their heads wondering why Bug A is still there.
I have the opposite experience with vibe coding. I know where the models are. I know what's in the DB, the tables, every migration, even though I focus on the FE. I can tell the AI where the files are before it loads the first prompt.
I know when it's creating memory leaks, whereas I usually miss my own leaks because I'm in the code and forget obvious things like destroying a thread or coroutines running in the wrong space.
Shaun0•6mo ago
But there's a crucial difference between forgetting the "what" (the specific lines of code) and forgetting the "why" (the architectural trade-offs and business reasons). My concern is that AI-driven development accelerates the forgetting of the "why."
muzani•6mo ago
If you do want to use an LLM for brainstorming, design, etc., it should be done before the coding. Like humans, they're not great at doing both at the same time.
The models we have now are just very poor at setting things up from scratch. I think vibe coding leans toward frameworks like Expo because they're a bit more stable without needing any architectural work, but they're rigid and limited.
In mobile, we always use a pattern like view-viewmodel-repository-service. There's little reason to use anything less for an app with lots of async work (e.g. web apps). There are also patterns like design systems, where all your themes and colors live in one place. But the AI will almost never set up these patterns on its own. It's well aware of them and will follow them seamlessly once you tell it to, but you have to set them up first.
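To make that layering concrete, here's a minimal TypeScript sketch of the view-viewmodel-repository-service pattern. All names (`UserService`, `UserRepository`, `UserViewModel`) are hypothetical, and the service is stubbed rather than making a real network call; the point is only the direction of the dependencies.

```typescript
// Service: the only layer that talks to the outside world
// (network, storage). Stubbed here; a real one would call fetch().
class UserService {
  async fetchUserName(id: string): Promise<string> {
    return `user-${id}`;
  }
}

// Repository: hides the data source behind a stable interface and
// is the single place that decides caching and merging.
class UserRepository {
  private cache = new Map<string, string>();
  constructor(private service: UserService) {}

  async getUserName(id: string): Promise<string> {
    const cached = this.cache.get(id);
    if (cached !== undefined) return cached;
    const name = await this.service.fetchUserName(id);
    this.cache.set(id, name);
    return name;
  }
}

// ViewModel: exposes view-ready state; no rendering concerns,
// no knowledge of where the data came from.
class UserViewModel {
  title = "loading…";
  constructor(private repo: UserRepository) {}

  async load(id: string): Promise<void> {
    this.title = await this.repo.getUserName(id);
  }
}

// The view would observe the viewmodel; here we drive it directly.
async function demo(): Promise<string> {
  const vm = new UserViewModel(new UserRepository(new UserService()));
  await vm.load("42");
  return vm.title;
}
```

The point of the commenter's complaint is the scaffolding itself: once these layers exist, an AI can slot new features into them cleanly, but left to its own devices it tends to collapse everything into one layer.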
therealpygon•6mo ago
Plus, these are the kinds of AI posts that always make me laugh. It's like saying calculators are bad because if you only punch in calculations and never think about them, you don't learn how to do math.
If you don’t know what your code does when you use AI, it is a poor reflection on you as a developer who didn’t understand any of the code before committing it to the project.
What makes it more humorous is that I guarantee most of those same people don't have a clue about the code in all the packages they pull in and use, but suddenly the AI is to blame when they willfully don't understand their own code.