Basically the same playbook MS and social media used: build a proprietary silo around the data, and amass enough of it that moving away from the first provider becomes too big an inconvenience.
It's good that the EU has laws now to ensure data interoperability, export & ownership.
One thing we've leaned heavily into is using LangGraph for agentic workflows, and it's really opened the door to cool ways you can use AI. These days, the way I tell AI "wrappers" apart from "tools" is the underlying paradigm. Most wrappers just copy the ChatGPT/Claude paradigm where you have a conversation with an agent; tools are where you take the ability to generate content and plug it into a broader workflow.
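To illustrate what "plugging generation into a broader workflow" looks like, here's a minimal LangGraph sketch: a two-node graph where a generation step feeds a downstream review step. The state fields and node bodies are invented placeholders, not a real pipeline; a real workflow would call an LLM inside `generate`.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    draft: str
    approved: bool

def generate(state: State) -> dict:
    # A real node would call an LLM here; stubbed for the sketch.
    return {"draft": "generated content"}

def review(state: State) -> dict:
    # Downstream step that consumes the generated content.
    return {"approved": len(state["draft"]) > 0}

builder = StateGraph(State)
builder.add_node("generate", generate)
builder.add_node("review", review)
builder.set_entry_point("generate")
builder.add_edge("generate", "review")
builder.add_edge("review", END)

app = builder.compile()
print(app.invoke({"draft": "", "approved": False}))
```

The point is that the model call is just one node among many, rather than the whole product.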
Probably my single biggest mistake so far in developing LLM tooling has been trying to use LangGraph even after inspecting the codebase, because people I thought were smarter than me hyped it up.
Do yourself a favor and just write the plumbing yourself. It's a lot easier than one might think before digging in: tool calling is literally a loop that passes tool requests and responses back and forth until the model produces a final answer, and having your own abstractions makes it a lot easier to build proper workflows. Plus you get to use whatever language you want and don't have to deal with Python.
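For anyone who hasn't dug in, here's roughly what that loop looks like. This is a minimal sketch against the OpenAI Python SDK (Python only for brevity); the model name and the `get_weather` tool are placeholder assumptions:

```python
import json
from openai import OpenAI

client = OpenAI()

def run_tool(name: str, args: dict) -> str:
    # Dispatch to your own functions; get_weather is a placeholder.
    if name == "get_weather":
        return json.dumps({"city": args["city"], "temp_c": 21})
    raise ValueError(f"unknown tool: {name}")

def agent_loop(messages: list, tools: list) -> str:
    while True:
        resp = client.chat.completions.create(
            model="gpt-4o",        # any tool-capable model
            messages=messages,
            tools=tools,
        )
        msg = resp.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:
            return msg.content     # no tool requests left: final answer
        for call in msg.tool_calls:
            # Run each requested tool and feed the result back.
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_tool(call.function.name,
                                    json.loads(call.function.arguments)),
            })
```

The entire "framework" is that while loop; everything else is abstractions you're better off owning yourself.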
Examples include audio/stem separation and object segmentation.
They're not wrappers, but whatever you'd call the thing one step deeper down in complexity.
Now it's AI. Only after doing this for 20+ years do I really appreciate that the arduous process and product winnowing that happens over time is the bulk of the value (and the moat, when none other exists).
The difference is that with AI they will send your data to a third party.
I think of a wrapper more as a very thin layer around the model, and a thin layer is easy to reproduce. I don't question that a smart collection of wrappers can make a great product. It's all about the idea :)
However, if one's idea is based purely on wrappers, there's really no moat, nothing stopping somebody else from copying it in a moment.
A Large Language Model is just a Large Hadron Model with better marketing.
This is the greatest advertising opportunity since the invention of cereal. We have six identical companies making six identical products. We can say anything we want.
Software is what makes inference valuable because it builds a workflow that transforms tokens and data into practical benefits.
Look at the payment plans for Lovable, Figma Make, Claude Code. None of them charge by the token; they charge in obfuscated 'credits'. We don't know the current credit economics, but it's a safe bet the credit markup will increase, probably eventually reaching 10x the token cost. Users will gladly pay for it, because again, tokens do nothing for them. It's the Claude Code and Figma Make products that make them productive.
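A back-of-the-envelope illustration of what a 10x markup would mean; every number below is invented, since the real credit economics aren't public:

```python
# Hypothetical numbers, purely for illustration.
upstream_token_cost = 0.02   # USD the vendor pays per request in tokens
credit_price = 0.20          # USD the user effectively pays per credit
requests_per_credit = 1      # assume one request consumes one credit

markup = credit_price / (upstream_token_cost * requests_per_credit)
print(f"effective markup: {markup:.0f}x")  # -> 10x
```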
In many markets, transparency wins. Think of Carfax, bank fees, or Airbnb pricing: when regulators or competitors force clarity, buyers benefit and trust grows. In a functioning government that serves the people (regardless of party), we would see this here too.
People believe they “need” these AI products partly because they're saturated in both earned and paid media. In '23 there were nearly 400k articles covering AI. I think we can all safely assume it's more now, and when we include financial reporting, it's quite inescapable.
jgalt212•2mo ago
I'm not sure I agree with this, because even though Cursor pays north of 100% of its revenues to Anthropic, Anthropic is selling inference at a loss. So if Cursor builds and hosts its own models, it still has the marginal costs > marginal revenues problem.
The way out for Cursor could be a self-hosted, much smaller model that focuses on code rather than the world. That could have inference costs lower than marginal revenues.
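The argument in numbers, as a hypothetical sketch (all figures invented; none of these companies publish their unit economics):

```python
# All figures hypothetical, for illustration only.
revenue_per_user = 20.0            # monthly subscription, USD
anthropic_bill_per_user = 22.0     # "north of 100% of revenues"
print(revenue_per_user - anthropic_bill_per_user)    # -2.0: losing money

# A small self-hosted code-only model changes the equation only if
# its amortized serving cost lands below revenue:
small_model_cost_per_user = 8.0    # assumed GPU + ops, amortized
print(revenue_per_user - small_model_cost_per_user)  # 12.0: viable
```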
simianwords•2mo ago
> Anthropic is selling inference at a loss.
The cost of models has gone down dramatically over time.
jgalt212•2mo ago
source: https://www.wheresyoured.at/
vrighter•2mo ago
Never mind the fact that the current model is always outdated, and a new (bigger) one is always being trained in parallel with the supposedly "cheaper" inference.