OpenAI is a geopolitically important play besides being a tech startup so it gets pumped in funding and in PR, to show that we're still leading the world. But that premise is largely hallucinated.
A fair chunk of the tech who’s-who seem to find his thinking useful.
[0] https://www.vox.com/2017/10/16/16480782/substack-subscriptio...
There's nothing inherently wrong with comments referring to him by his first name, but I don't think I've ever seen a similar pattern with any other sources here outside of maybe a few with much more universal name recognition. It's always struck me as a little odd, but not a big enough deal for me to go out of my way to comment about it before now.
I read Stratechery. Ben's articles are what he makes for public consumption. This weekly summary thing is a new roundup for subscribers, and just happens to be public, and if you're not a subscriber you can't follow the links. If Ben could choose something to be #1 on Hacker News it would likely be a full article with this headline, rather than a weekly summary post for subscribers.
OpenAI has been at the top of the app store for years now. A lot of people are interested in it. That trivially explains the upvotes without a conspiracy.
Kudos to the headline writer on this one.
Maybe some are too young to remember the great migrations from/to MySpace, MSN, ICQ, AIM, Skype, local alternatives like StudiVZ, ..., where people used to keep in contact with friends. Facebook was just the latest and largest platform where people kept in touch and expressed themselves in some way. People adding each other on Facebook before others to keep in touch hasn't been a thing for 5 years. It's Instagram, Discord, and WhatsApp nowadays depending on your social circle (two of which Meta wisely bought because they saw the writing on the wall).
If I open Facebook nowadays, then out of ~130 people I used to keep in touch with through that platform, pretty much nobody is still doing anything on there. The only sign of life is some people showing as online because they have the Facebook app installed to use direct messaging.
No, people easily migrate between these platforms. All it takes is putting your new handles (Discord ID/phone number/etc.) as a sticky so people know where to find you. And children especially will always use whichever platform their parents don't.
Small caveat: This is a German perspective. I don't doubt there's some countries where Facebook is still doing well.
When you realize this, you realize that a lot of other supposedly valuable tech companies operate in the exact same way. Worrying that our parents' retirement depends heavily on their valuations!
Maybe you should short the stock to hedge your parents' retirement :)
As Peter Thiel says: “competition is for losers”
https://www.reddit.com/r/Bard/comments/1mkj4zi/chatgpt_pro_g...
I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say, Oracle to Ingres) was also "an afternoon's work," but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
The bigger problem is that there was never a way to move data from Oracle to Postgres in pure data form (i.e., point Postgres at your Oracle data directory and it "just works"). Migration is always a pain, and thus there is a substantial degree of stickiness, due to the cost of moving databases in terms of both risk and effort.
In contrast, vendors [1] are literally offering third-party LLMs (such as Claude) in addition to their own, with one-click switching. This means users can try alternatives and, if they like them, switch with little friction.
[1] https://blog.jetbrains.com/ai/2025/09/introducing-claude-age...
All one needs to do is say something like "tell me all of the personalization factors you have on me" and then copy and paste that into the next LLM with "here's stuff you should know about how to personalize output for me."
The vendors have all standardised on OpenAI's API surface - you can use OpenAI's SDK with a number of providers - so switching is very easy. There are also quite a few services that offer exactly this as a product.
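A minimal sketch of why that makes switching nearly free: most providers accept the same OpenAI-style chat-completions payload, so "changing vendors" reduces to swapping a base URL and model name. The provider URLs and model names below are illustrative assumptions, not taken from any vendor's docs.

```python
# Most providers expose an OpenAI-compatible /chat/completions endpoint,
# so switching vendors is largely a base-URL and model-name change.
# URLs and model names here are illustrative assumptions.

PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1",  "model": "gpt-4o"},
    "other-llm": {"base_url": "https://api.example.com/v1", "model": "example-model"},
}

def build_chat_request(provider_name: str, messages: list) -> dict:
    """Return the endpoint URL and request body for an OpenAI-compatible API."""
    p = PROVIDERS[provider_name]
    return {
        "url": p["base_url"] + "/chat/completions",
        "body": {"model": p["model"], "messages": messages},
    }

msgs = [{"role": "user", "content": "Hello"}]
a = build_chat_request("openai", msgs)
b = build_chat_request("other-llm", msgs)
# Only the URL and model name differ; the message payload is identical.
assert a["body"]["messages"] == b["body"]["messages"]
```

The actual HTTP call (and auth header) is the only provider-specific piece left, which is exactly why so many SDKs and proxy services can paper over it.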
The real test is whether a different LLM actually works for your use case - hence the need for evals to check.
True, but that's not really applicable here, since LLMs themselves are not stable, and are certainly not stable within a vendor's own product line. Imagine if every time Oracle shipped a new version it was significantly behaviorally inconsistent with the previous one. Upgrading within a vendor and switching vendors end up being the same task. So you quickly solidify on either
1) never upgrading, although with these being cloud services that's not necessarily feasible, and since LLMs are far from a local maximum in quality, that'd quickly leave your stack obsolete,
or
2) being forced to be robust, which makes it easy to migrate to other vendors
At an enterprise level however, in the current workload I am dealing with, I can't get GPT-5 with high thinking to yield acceptable results; Gemini 2.5 Pro is crushing it in my tests.
Things are changing fast, and OpenAI seems to be the most dynamic player in terms of productization, but I'm still failing to see the moat.
Moving from ChatGPT to Claude I would lose a lot of this valuable history.
In the EU/UK you might not have rights to the memories right now, but you have rights to the inputs that created those memories in the first place.
Wouldn't be too hard to export your chat history into a different AI automatically.
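As a rough sketch of what "automatically" could mean here: flatten an exported chat history into a plain-text transcript that another assistant can ingest. The export schema below is a simplifying assumption; real vendor exports (e.g. ChatGPT's conversations.json) nest things differently.

```python
# Turn an exported chat history into a plain-text transcript that could
# be pasted into (or uploaded to) a different assistant.
# The export schema used here is an assumption for illustration only.

def transcript_from_export(export: list) -> str:
    lines = []
    for conversation in export:
        lines.append(f"# {conversation.get('title', 'Untitled')}")
        for msg in conversation.get("messages", []):
            lines.append(f"{msg['role']}: {msg['content']}")
    return "\n".join(lines)

sample = [{
    "title": "Trip planning",
    "messages": [
        {"role": "user", "content": "Plan a trip to Lisbon"},
        {"role": "assistant", "content": "Day 1: Alfama..."},
    ],
}]
print(transcript_from_export(sample))
```

The hard part isn't the transformation; it's that "memories" distilled by one vendor aren't included in what you can export, which is the GP's point about inputs vs. memories.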
Edit: mixed up my dates claiming DALL-E came out before GPT-3
That was 2017. And of course Google & UofT were working on it for many years before the paper was published.
Deep learning has now been around for a long time. Running these models is well understood.
Obviously running them at scale for multiple users is more difficult. The actual front ends are not complicated, as is evidenced by the number of open source equivalents.
I think it's also worth pointing out that the polish on these products was not actually there on day one. I remember the first week or so after ChatGPT's initial launch being full of stories and screenshots of people fairly easily getting around some of the intended limitations with silly methods like asking it to write a play where the dialogue has the topic it refused to talk about directly or asking it to give examples about what types of things it's not allowed to say in response to certain questions. My point isn't that there wasn't a lot of technical knowledge that went into the initial launch, but that it's a bit of an oversimplification to view things at a binary where people didn't know how to do it before, but then they did.
E.g. here's a forecast of 2021 to 2026 from 2021, over a year before ChatGPT was released. It hits a lot of the product beats we've come to see as we move into late 2025.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
(The author of this is one of the authors of AI 2027: https://ai-2027.com/)
Or e.g. AI agents (this is a doc from about six months before ChatGPT was released: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...)
It only takes labs producing better and better models, and a race to the bottom on token costs.
(You can say default in various browsers and a phone OS and that's probably the main component but it's not clear changing that default would let Bing win or etc.)
The value is not in the LLM but in vertical integration and providing value. OpenAI has identified this and is pursuing vertical integration in a hurry. If the revenue sustains, it will be because of that. For the consumer space, again, Nvidia is better positioned with their chips and SoCs, but OpenAI is not a sure thing yet. By that I don't mean they are going to fall apart; they will continue to make a large amount of money, but whether it's their world or not is still up in the air.
I did think his GPT-5 commentary was good, insofar as picking up the nuance of why it's actually better than the immediate reactions I, at least, saw in the headlines.
Where I do agree with you is how Stratechery's getting a little oversaturated. I'm happy Ben Thompson is building a mini media empire, but I might have liked it more when it was just a simple newsletter that I got in my inbox, rather than pods, YouTube videos, and expanding to include other tech/news doyens. Maybe I'm just a tech media hipster lol.
In the last few interviews with him I have listened to he has said that what he wants is "your ai" that knows you, everywhere that you are. So his game is "Switching Costs" based on your own data. So he's making a device, etc etc.
Switching costs are a terrific moat in many circumstances and require a 10x product (or whatever) to get you to cross over. Claude Code was easily a 5x product for me, but I do think GPT-5 is doing a better job at just "remembering personal details," and that's compelling.
I do not think that apps inside ChatGPT matter to me at all, and I think they will go the way of all the other "super app" ambitions OpenAI has.
Today I asked GPT-5 to extract a transcript of all my messages in the conversation, and it hallucinated messages from a previous conversation, maybe leaked through the memory system. It cannot tell the difference. Indiscriminate learning and use of the memory system is a risk.
And most people actually don't care what CPU they have in their laptop (enthusiasts still do, which I think continues to match the analogy); they care more about the OS (ChatGPT app vs. Gemini, etc.).
- Sneaking in how someone went from a Sora skeptic to a purported creator within a week.
- Calling the result the "future of creation".
- Titling the advertisement "It’s OpenAI’s World, We’re Just Living in It".
What they are doing here is pitching Sora to attention-deficit teenagers so they have yet another way to turn their favorite content creator's hair red. As if that didn't already exist.
- Open-source LLMs are at most 12 months behind ChatGPT/Gemini;
- Gemini from Google is just as good and much cheaper, for both Google and the users, since they make their own TPUs;
- Coding: OpenAI has nothing like Sonnet 4.5.
They look like they invested billions to do research for competitors, which have already taken most of their lunch.
Now with the Sora 2 app, they are just burning more and more cash so people can watch those generated videos on TikTok and YouTube.
I find all the big talk hilarious. I hope I get proven wrong, but they seem to be getting wrecked by competitors.
https://www.wheresyoured.at/the-case-against-generative-ai/
OpenAI needs 1 trillion dollars over the next four years just to keep existing. That's more than all currently available capital from all of private equity combined - even taken all together, it still won't be enough for OpenAI over the next four years.
It's just a staggering amount of money that the world needs to bet on OpenAI.
"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based, because the work that can be done 100% remotely will likely go overseas."
He responded:
"Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."
I don't know if he was taking drugs or what. I find his persona on Twitter to be baffling.
In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually with a non-sequitur redirect or ad hominem.
In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.
For me 10 billion, 100 billion, and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion is.
They do look like they're trying to grab the market with tooling, but if their tools are open source and you can switch the models, then where is the moat?
Ah yes, the ChromeOS strategy. How'd that work out for Google?
Building a platform is good, a way to make quite a bit of money. It's worked really well for Google and Apple on phones (as Ben notes). But there's a reason it didn't happen for Google on PCs. I find it hard to believe it will for OpenAI. They don't (and cannot) control the underlying hardware.
They’re the only AI lab with their own silicon.
Edit: they didn’t say “likely,” they just marveled at the talent + data + TPU + global data centers + not needing external investment.
If I recall correctly, their theory was that google could deliver tokens cheaper than anyone else.
They dominate the EDU market?