I was watching the CEO of that Chad IDE give an interview the other day. They're doing the same thing as Amp, just with "brain rot" ads/games etc. instead (different segment, fine): they use that activity to produce value (ad or game engagement, or whatever) for another business so they can offset the cost of more expensive models. (I presume this could extend to selling data too, but I don't know that they do that today.)
Congrats to the amp team on all the success btw, all I hear is great things about the product, good work.
I run every single coding prompt through Codex, Claude Code, Qwen and Gemini, compare which one gives me the best result, and go ahead with that one. Maybe I go with Codex 60% of the time, Claude 20%, and Qwen/Gemini the remaining 20%; it's not often that any of them gets enough right. I've tried integrating Amp into my workflow too, but as mentioned, it's too expensive. I do still think the grass is currently greenest with Codex.
That aligns with my anecdata :)
Because if I understand them correctly, aren't they a wrapper around all the major LLMs (focused especially on developer use cases)?
My intuition is that it's not deep... the differentiating factor is "regular" (non LLM) code which assembles the LLM context and invokes the LLM in a loop.
Claude/Codex have some advantage, because they can RLHF/finetune better than others. But ultimately this is about context assembly and prompting.
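To make the "non-LLM code in a loop" point concrete, here is a minimal sketch of that kind of agentic loop: plain code assembles the message context, invokes the model, executes any tool the model requests, and feeds the result back. The `call_llm` interface, the message format, and the `fake_llm` stand-in are all illustrative assumptions, not any particular product's API.

```python
def agent_loop(call_llm, tools, task, max_steps=10):
    # Context assembly: plain data structures, no model magic involved.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)  # model decides: final answer or tool call
        messages.append({"role": "assistant", "content": reply["content"]})
        tool = reply.get("tool")
        if tool is None:            # no tool requested -> we're done
            return reply["content"]
        result = tools[tool](reply["args"])
        # Feed the tool output back into the context and loop again.
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"

# Deterministic fake model so the sketch runs without an API key:
# first turn asks for a tool, second turn answers using the tool result.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"content": "let me check", "tool": "shout", "args": "hello"}
    return {"content": "result was " + messages[-1]["content"], "tool": None}

print(agent_loop(fake_llm, {"shout": str.upper}, "greet loudly"))
# prints "result was HELLO"
```

Everything differentiating here lives in the prompt text and the tool set; the loop itself is a dozen lines.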
The same thing is going to happen with all of the human language artifacts present in the agentic coding universe. Role definitions, skills, agentic loop prompts....the specific language, choice of words, sequence, etc really matters and will continue to evolve really rapidly, and there will be benchmarkers, I am sure of it, because quite a lot of orgs will consider their prompt artifacts to be IP.
I have personally found that a very high precision prompt will mean a smaller model on personal hardware will outperform a lazy prompt given to a foundation model. These word calculators are very very (very) sensitive. There will be gradations of quality among those who drive them best.
The best law firms are the best because they hire the best with (legal) language and are able to retain the reputation and pricing of the best. That is the moat. Same will be the case here.
You might get an 80% “good enough” prompt easily, but then all the differentiation (moat) is in that last 20%, and that 20% is tied to model idiosyncrasies, making the moat fragile and volatile.
I love that I am not welded to one model and someone smart has evaluated what’s the best fit for what. It’s a bit of a shame they lost Steve Yegge as their brand ambassador; a respected, practicing coder is a big endorsement.
To anyone on the fence - give it a go.
tonictato•6d ago
Amp was built by Sourcegraph, so I assume all investors and employees of Sourcegraph now get equity in Amp?
jeeyoungk•11h ago
The good thing is that afterwards the cap table of the subsidiary or spun-off company can evolve independently (e.g. Waymo / Amp can raise money independent of the parent company).
traceroute66•12h ago
Spin-off is where the parent company creates a subsidiary and distributes the shares in the subsidiary to the existing shareholders. So the shareholders end up holding shares of two companies. Share allocation is done on a pro-rata basis so each shareholder still has the same exposure they did before.
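The pro-rata mechanics described above can be sketched in a few lines: each parent shareholder receives subsidiary shares in proportion to their parent stake, so everyone's percentage ownership is unchanged. The holder names and share counts below are made-up illustrative numbers.

```python
def spin_off(parent_holdings, subsidiary_shares):
    # Distribute subsidiary shares pro rata to parent-company holders.
    total = sum(parent_holdings.values())
    return {holder: subsidiary_shares * shares // total  # floor fractional shares
            for holder, shares in parent_holdings.items()}

# Hypothetical parent cap table: 50% / 30% / 20%.
holdings = {"founder": 500_000, "vc": 300_000, "employees": 200_000}
print(spin_off(holdings, 1_000_000))
# {'founder': 500000, 'vc': 300000, 'employees': 200000}
```

Note the ownership percentages (50/30/20) come out identical in the subsidiary, which is exactly the "same exposure as before" property.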