After using Cursor heavily the past few weeks, I agree with the author's points. The ability to work outside of Cursor/AI is paramount within small software teams, because you will periodically run into things it can't do, or worse, it will lead you in a direction that wastes a lot of developer time.
Cursor, and the underlying models, will get better at this over time, but the executive vision here is broken. At this point I can only laugh at the problems this generation of startups will inevitably go through when they realize their teams no longer have the expertise to solve things in more traditional ways.
I do think we are in a transitional period now. Eventually all editors will have the same agentic capability. That's why editor-agnostic tools like Claude Code and aider are much more exciting to me.
> In fact, Cursor’s code completion isn’t much better than GitHub Copilot’s. They both use the same underlying models
The difference is in the tooling around the models: codebase indexing, docs, MCP servers, rules, linter error feedback, and agents that automatically pull all of those things together. If you don't use all of that, the models will only reach a fraction of their potential usefulness.
I agree that Cursor is overhyped by some, but it sounds like the author hasn't given it a fair chance.
People who hype autocomplete usually lack technical skills.
People who hype memory-safe languages usually lack technical skills.
People who hype compilers usually lack technical skills.
That was some wordplay: it's the usual attack on whatever current technology makes technology more attainable to more people. With LLMs, though, there is one major worry: they encourage de-skilling and getting addicted to having an LLM think for us. If you can run your own LLMs, you're resilient to that. The really bad side is when the LLM companies put price tags on the 'think-for-you' machine. That represents a great de-skilling and is anti-critical-thought.
I'm not saying "don't use LLMs". I'm saying run them yourself, learn how to work with them as an interactive encyclopedia, and don't let them take core control over your intellectual thought.
Not sure what you mean. Can't a local LLM get you addicted in the same way a cloud one can?
For example, you can run multiple LLMs and compare their outputs against each other.
You can issue system messages (commands) to perform specific actions, like ignoring arbitrary moralizing or processing first- and second-derivative results.
By running and commanding an LLM locally, you become an actual tool-user, rather than a service user at the whim of whatever the company (OpenAI, etc.) wishes.
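To make that concrete, here is a minimal sketch of what "running and commanding it yourself" can look like, assuming a local Ollama server on its default port; the endpoint, model names, and prompts are assumptions for illustration, not a recommendation of any particular stack.

```python
# Minimal sketch: talk to a locally running LLM (here via Ollama's HTTP API,
# assumed to be listening on its default port) with your own system message.
# Model names and the endpoint are assumptions -- use whatever you run locally.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def ask_local(prompt: str, system: str, model: str = "llama3") -> str:
    """Send one chat turn to a local model with a caller-controlled system message."""
    payload = {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system},   # you decide the ground rules
            {"role": "user", "content": prompt},
        ],
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    # Cross-check the same question against two local models and compare the answers.
    question = "Explain what a sparse mixture-of-experts model is, in two sentences."
    system = "Answer concisely. Do not cite sources you are not sure exist."
    for model in ("llama3", "mistral"):
        print(f"--- {model} ---")
        print(ask_local(question, system, model=model))
```

The point of the sketch is just that the system message and the choice of models stay under your control, which is the "tool-user rather than service user" distinction above.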
Cursor/windsurf/roo/kilo/zed are about smoothing over rough edges in actually getting work done with agentic models. And somewhat surprisingly, the details matter a lot in unlocking value.
Not sure that this is true. Cursor's agent mode is different from its code completion, and the code completion is a legitimately novel model, I believe.
I was surprised to learn this, but they made some interesting choices (like using a sparse mixture-of-experts architecture for their tab-completion model, to get high throughput and low latency).
Originally, I think they used frontier models for their chat feature, but I believe they've recently replaced that with something custom for their agent feature.
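For readers unfamiliar with the term: this is not Cursor's actual model, just a rough sketch of what sparse mixture-of-experts routing means and why it helps latency. Only the top-k experts run per token, so most parameters sit idle on any given forward pass. All sizes and weights below are made-up placeholders.

```python
# Illustrative sketch only (not Cursor's model): top-k sparse mixture-of-experts routing.
# Each token is dispatched to just k of the E experts, so per-token compute stays small
# even when the total parameter count is large -- that's the throughput/latency win.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

# One tiny feed-forward "expert" per slot (weights are random placeholders).
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02  # gating projection

def moe_forward(x: np.ndarray) -> np.ndarray:
    """x: (n_tokens, d_model). Routes each token to its top_k experts only."""
    logits = x @ router                                  # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]        # indices of chosen experts
    # Softmax over only the selected experts' logits.
    chosen = np.take_along_axis(logits, top, axis=-1)
    weights = np.exp(chosen - chosen.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for i, token in enumerate(x):                        # per-token dispatch
        for j, e in enumerate(top[i]):
            out[i] += weights[i, j] * (token @ experts[e])
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_forward(tokens).shape)  # (4, 64): same shape, but only 2 of 8 experts ran per token
```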