Like, I'm publishing https://github.com/andrewmcwattersandco/git-fetch-file right now with Claude Sonnet 4 (thank you for recently upvoting that to the front page). And the whole repository view that GitHub Copilot and Claude Sonnet 4 have on my projects seems like the same exact thing you get in Cursor, but Cursor for some reason took longer with the exact same models, and I'm not sure why.
Maybe they prompt the models differently? I haven't taken a look.
Also, Cursor seems to be literally a Visual Studio Code fork! But everyone's talking about it lately, and no one is mentioning this. I don't understand.
Cursor seems like the weakest player of the three, because it's just a Visual Studio Code fork.
I haven't used VSC in a year or Cursor at all, but I hear similar things from colleagues.
So I'm using CC in Cursor (the little integration is nice) to get the best of both. None of Cursor's other AI features are helping, though.
It's terrible. For comparison, I've only used Cursor on greenfield toy projects, but Cursor is way better at the agentic stuff (the actual code generation AND the "review these changes" workflow) AND the tab/auto-complete stuff.
I hope Junie can make some leaps, because I really like JetBrains and don't want to see them fall behind.
It looks like chat-based agentic editing like this is going to be table stakes for AI-assisted editing moving forward.
Kiro, Void, Windsurf, Cline, Kilo, ... many, many others.
Couldn't you make the same argument about something like S3? How many companies are basically S3 wrappers? Or companies that use general AWS infra and make it slightly better? There could still be a market for add-on products. Why would Claude or OpenAI want the headache of managing an IDE? They're okay giving up some margin there.
I agree there is a huge rush of "AI wrapper" companies, whose moat is basically prompt engineering. Like an "AI buddy" or whatever. Those are all going to zero, IMO. But things like Cursor have a future. Maybe not at the hyped valuation, but long term something like this will exist.
There's a lot of talk around economics. What is going to be more economical than a provider building abstractions and margin optimizations around its own tokens and shipping directly to consumers, versus third parties doing token arbitrage?
Lastly, there's a lot of industry hype and narrative around agents. In my opinion, Claude Code is really the only effective, actual agent; the firstborn. Anthropic is signaling that the leading providers will no longer just train models. They are creating intelligent capabilities within the post-training phases / in RL. They are shipping the brain and the mech suit for it. Hence, eat the stack. From terminal to desktop, eventual robotics.
The strongman counter-argument would be that specialized interfaces to AI will always require substantial amounts of work to create and maintain.
If true, then similar to Microsoft, it might make more financial sense for Anthropic et al. to cede those specialized markets to others, focus on their core platform product, take a cut from many different specialized products, and end up making more as the addressable market broadens.
The major AI model providers substantially investing in specialized interfaces would suggest they're pessimistic about revolutionary core model improvements and are thus looking to vertical integration to preserve margin/moat.
But relatively speaking, it doesn't seem like interfaces are being inordinately invested in, and coding seems such an obvious agentic target (and dogfoodable learning opportunity!) that it shouldn't prompt tea leaf reading.
I think it instead (or also?) shows a related but orthogonal signal: that the ability and resources to train models are a strong competitive advantage. This is most obvious with deep research and I haven’t seen any wrapper or open source project achieve anywhere near the same quality as Gemini/Claude deep research, but Claude Code is a close runner up.
But now it's subsidized so I easily spend over $50 of Claude credits for my $20 in Cursor.
Also, the ability to swap out models is a big value add, and I don't have to worry about the latest and greatest. I switch seamlessly. Something comes out, next day it's on Claude. So now I'm using GPT, which is less than half the price. I don't want to have to think about it or constantly consider other options. I want a standardized interface where I can plug in whatever intelligence I want. Kind of like Dropbox, which can worry about whether to store in AWS, Azure, or GCP depending on which one is the best value prop.
I'm trying to imagine a graph where at some point in time t, the status of a company changes from "wrapper" (not enough "original" engineering) to "proper company" (they own the IP, and they fought for it!!!).
At what point did OpenAI cease being an NVIDIA wrapper and become the world’s leading AI lab? At what point did NVIDIA graduate from being a TSMC wrapper?
Clearly any company that gets TSMC N2 node allocation is going to win; the actual details of the chip don't matter all that much.
I think you can frame it as: how long would it take someone to recreate the product, given enough information about it?
Take, for instance, a "companion" app. Its simplest form is a prompt + LLM + interface. They don't own the LLM, so that leaves the prompt and the interface. The prompt is simple enough to figure out (often by asking the app in a clever way), so the interface is what's left. How easy is it to replicate? If it's like ChatGPT, pretty easy.
Now there are a few complications. Suppose there are network effects (Instagram is a wrapper around a protocol, but the network effects are the value). An LLM wrapper can create network effects (maybe there is a way to share or something), but it's difficult.
OpenAI is not a wrapper on NVIDIA, because it would take billions of dollars (in energy) to train the LLM with the NVIDIA chips. It would take me a weekend to recreate a GPT wrapper or just fork an open source implementation. There is also institutional knowledge (which is why Meta is offering $1bn+ for a single eng). Or take something like Excel. People know how it works; people have dissected it endlessly. But the cost to recreate it, even with perfect knowledge, is very high, plus there are network effects.
I think this might be more narrow than most uses of the term “wrapper” though.
https://news.ycombinator.com/item?id=44424456
Make Fun of Them (42 days ago, 43 comments)
Specifically the product value compared to the operating cost.
Now, if the tool (Claude Code) really is very valuable, and Cursor is just a very good integration, and they manage to guard their moat (brand, subscription, glue code), maybe there's something to it.
I'm not a businessperson, like I said; this just immediately reminded me of that post I read over the weekend.
This logistical leg is where most of the work is done since you have to:
Maintain large slush funds for bribing law enforcement.
Run workshops and employ technicians that strip civilians' cars, embed cocaine in the nooks, and hand them over to American civilians to drive over the border.
Hire engineers from Pakistani universities to build narco-submarines in riverine deltas, which are then used to cross the Atlantic for European supplies.
Maintain contact with your African coastal syndicates who have another trans-Saharan route for getting drugs into Europe.
Run payroll for your workforce (this is a business after all).
Maintain a decently trained fighting force to slaughter enemies that encroach on your turf, as well as informants, uncompromising cops, politicians, etc. This includes training, paying, and initiating them, and hiring good, experienced fighters. Right now, it's credibly reported that Mexican cartels are volunteering to fight in Ukraine to gain experience with drones and other UAVs to expand their war-making capabilities.
Hire chemistry undergrads from local STEM universities to turn synthetic precursors from Asia into fentanyl, etc.
So, just like African cocoa farmers and American growers see just a tiny slice of the profit the end-products produce, the cartels are in the logistics & firepower business; they've outsourced a huge chunk of the business to growers, just like their peers in the chocolate and grocery business.
I find pure Claude and Neovim to be a great pair. I set up custom Vim commands to make sharing file paths, line numbers, and code super easy. That way I can move quickly through code for manual development, and have Claude right there with the context it needs.
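A minimal sketch of that kind of mapping (the key bindings and register choice here are my own guesses, not the commenter's actual setup):

```vim
" Copy 'path/to/file:42' for the cursor line into the system clipboard,
" ready to paste into a Claude prompt.
nnoremap <leader>cp :let @+ = expand('%') . ':' . line('.')<CR>

" Visual mode: copy 'path:start-end' plus the selected lines themselves.
xnoremap <leader>cs :<C-u>let @+ = expand('%') . ':' . line("'<") . '-' . line("'>") . "\n" . join(getline(line("'<"), line("'>")), "\n")<CR>
```

With something like this, a quick `<leader>cs` on a visual selection gives the agent both the location and the code in one paste.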
I don't think there's a human on this planet who can even predict the state of the industry in 3 years. In my entire time in the industry, I have always felt like I had a good line of sight three years away. Even when the iphone came on the scene, it felt like a generational increase rather than a revolution.
We just have no idea. We don't know the extent of how it can improve. We don't know if we are still on exponential improvement or the end of the S curve. We don't know what investment is going to be like. We don't know if there is autonomy in the future. We don't know if it's going to look more like the advancement of autonomous vehicles where everyone thought we were just a year or two away from full autonomy - or at least people bought the hype cycle.
And anyone who says they know has something to sell you.
Can’t wait for AI 2.0 and ads :(
But like the OP said, we can't predict what's going to happen even three years out, so I've just resigned myself to "going with the flow" and enjoying the ride as much as I can. If the negative consequences are coming for us, might as well get as much benefit as we can now while we're all still bright-eyed and bushy-tailed.
I see it as a chance for the capital class to sell everyone shovels and build railroads that will further cement their power and influence, all the while insisting software and art are more democratic than ever, and using the same tools to build surveillance infrastructure that will make any dissent impossible.
So yeah, exciting is one word you could choose.
The trick I've always used in these circumstances is the cynical approach. That is, assume nothing changes. If it does change, adopt late rather than burning all your time, money, and energy on churn and experimentation.
In the last 35 years of doing this, I've seen perhaps 10% of technology actually stick around for more than a few years. I'll adopt at maturity and discard when it's thoroughly obsolete.
Being in fintech, no technology so far has fundamentally changed what business we do, or even how it's done, for a long time, even if we pretend it does. A lot of the changes have just been a cost naively written off through arbitrary justification or keeping up with trends. 99% of what we do is CRUD, shit reports, and batch processing, just like it was when it was S/390.
Even fewer things have had an ROI or a real customer benefit. Then again we have actual customers not investors.
Don't kid yourself. Skepticism is not neutrality. These days throwing shade is a growth industry. There's money to be made shorting just like there is going long. Neither is the objective, disinterested position, although skepticism always enjoys the appearance of prudence, at least to the ignorant.
Anyone who says they're not trying to sell you something lacks self-awareness about what they themselves have been sold.
At this point we have half the industry being overly pessimistic and the other half being unreasonably optimistic. The median truth would be in the middle. But I don't think that's a very sound position to take either.
The reason is that I think we're actually dealing with a severe imagination deficit in society. That always happens around big technological changes, and this definitely looks and feels like such a thing. Ten years from now it might all seem obvious in retrospect. But right now we have the optimist camp predicting what boils down to the automotive equivalent of "faster horses" (AGI, I, Robot, self-driving cars, and all the rest). It's going to be this wonderful utopia where no one works and everything runs by itself. I'm not a big believer in that, and I don't think that's how economies work.
And we have a bunch of pessimists predicting that it's all going to end in tears. Dystopia, everybody is going to be unemployed, and a lot of other Luddite nonsense.
The optimists basically lack imagination so they just reach for what science fiction told them is going to happen (i.e. rely on other people's science fiction). And then the pessimists basically are stuck imagining the worst always happens and failing to imagine that there might be things that actually do work.
It's fairly easy to predict/bet that both sides are probably imagining things wrong. Just like people did three years ago. Including myself here. So, not making a prediction here. But, kind of curious to see how the next few years will unfold. Lots of amazing stuff in the past three. I'll have some more of that please.
Also, yes, the labs control the supply, but there are many labs, so there's lots of competition. They can't, for example, just jack up the prices on the dealers (apps) like a monopoly could. So again, I'm not sure being a dealer is actually bad here.
They say there is no moat, but in fact a feature in Anthropic's app takes a good few months up to a year to appear in OpenAI's chat app, and vice versa.
You could say some of those issues are solvable by allocating more money and resources, which might be true, and it could be true that it would be beneficial for OpenAI to develop their own Cursor-like platform in the future, to get better margins. But in reality, who knows when that future will come? Maybe by then Cursor will have much more of a moat and entering the market will be much more difficult. Maybe OpenAI will keep developing their core product, and entering other domains will not be worth the effort.
Currently, LLMs as a product have not been solved. All companies operate at a loss in order to rise to the top, and we still don't know how it will be monetized in the future. But as it stands, there is already a moat: a moat in infrastructure. Even though a few years ago they said that LLMs have no moat, there is now a strong set of features and "agents" that deliver deep reasoning, online search, and a multimodal experience.
So, there is a moat, and a moat can accumulate over time. For the article to be true, it would have to show that the current moat is low and that it cannot accumulate.
I have a question for you: will you be able to pass the exam at the midterm?
Calculus or programming or advanced algebra etc are nowhere near the same difficulty, and the same rules don't apply.
And yes, I memorized a bunch of integrals for my calculus class and then promptly forgot them all after the final exam. It's not worth it to remember how to do integrals other than polynomials.
LLMs are a huge waste of time
You’re crazy to only use one AI service if you’re doing serious development.
Use the 3 big ones all at the same time.
Ask them all to solve the same problem. Ask them all to evaluate each other's solutions. Do this over and over in multiple iterations.
Each model is good at different things.
When you’re not getting a great result with this one, switch to another.
Using one AI is crazy when three together are more powerful.
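The loop described above can be sketched in a few lines. This is only an illustration of the workflow, not a real client: `ask()` is a hypothetical stand-in for whatever SDK call each provider actually exposes, and the model names are placeholders.

```python
def ask(model: str, prompt: str) -> str:
    """Placeholder: in a real setup, call the provider's API for `model` here."""
    return f"[{model}] answer to: {prompt}"

def cross_evaluate(models: list[str], problem: str, rounds: int = 2) -> dict[str, str]:
    """Have every model solve the problem, then repeatedly critique and
    improve against the other models' answers."""
    answers = {m: ask(m, problem) for m in models}
    for _ in range(rounds):
        revised = {}
        for m in models:
            others = "\n".join(a for name, a in answers.items() if name != m)
            revised[m] = ask(m, f"Evaluate these solutions and improve yours:\n{others}")
        answers = revised  # each round feeds the critiques back in
    return answers

results = cross_evaluate(["model-a", "model-b", "model-c"], "Reverse a linked list")
for model, answer in results.items():
    print(model, "->", answer[:60])
```

The point of the structure is that each round sees the other models' latest attempts, so strengths of one model can correct weaknesses of another.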
empath75•2h ago
However, they will eventually get purchased by an AI company because the _product_ is great.
CharlesW•1h ago
'Great' is in the eye of the beholder. For me, Cursor was one of the least-effective solutions of the many options (from Cursor and other AIDEs, to repo-centric web-based options like Jules, to CLI-based options like Claude Code) I evaluated a few months ago.
damon_c•1h ago
"Revert the changes to <file>..." 4 zillion tokens... 10 seconds...
Instead of > git checkout <file>
just to keep Cursor in the loop.
I assume I have probably eaten up my $20/month in tokens just on stuff like that.
verdverm•14m ago
git checkout would destroy this (and "corrupt" the Copilot session state)