Can some business person give us a summary on PBCs vs. alternative registrations?
(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)
[1] https://github.com/OpenCoreVentures/ocv-public-benefit-compa...
[2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-repo...
Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.
Key Structure Changes:
- Abandoning the "capped profit" model (which limited investor returns) in favor of a traditional equity structure
- Converting the for-profit LLC to a Public Benefit Corporation (PBC)
- Nonprofit remains in control but also becomes a major shareholder
Reading Between the Lines:
1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.
2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.
3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.
4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).
Red Flags:
- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding
This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and because they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.
everyone will roll over if all large public companies roll over (and they will)
https://www.theregister.com/2025/02/03/us_senator_download_c...
One of them will eventually pass given that OpenAI is also pushing for protection:
Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work. Barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.
MS, Google and Apple and Meta have gigantic levers to pull and get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is (it's better) for anything besides image generation), people are going to start pressing them. And good luck to OpenAI fighting that.
Well Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek R1's distilled models are available for download, so anyone with a (~4 year old or newer) Mac or gaming PC can run them.
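To illustrate the consumer case, here's a minimal sketch, assuming a local Ollama install and its Python client (the model tag is just one of the hosted R1 distills; pick whatever your hardware can fit):

    # sketch: chat with a locally downloaded DeepSeek R1 distill via Ollama's Python client
    # assumes `ollama pull deepseek-r1:8b` has already fetched the weights
    import ollama

    response = ollama.chat(
        model="deepseek-r1:8b",
        messages=[{"role": "user", "content": "Explain what a Public Benefit Corporation is."}],
    )
    print(response["message"]["content"])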
If the only thing keeping these companies valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.
For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.
But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.
Switching from ChatGPT to the many competitors is neither expensive nor painful.
and that makes complete sense if you have more than a layperson's understanding of the tech. Language models were never going to bring about "AGI."
This is another nail in the coffin
Which sounds pretty in-line with the SV culture of putting profit above all else.
If I were a person like several of the people working on AI right now (or really, just heading up tech companies), I could be the kind to look at a possible world-ending event happening in the next - eh, year, let's say - and just want to have a party at the end of the world.
Five years to ten years? Harder to predict.
The window there would at _least_ include the next 5 years, though obviously not ten.
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.
But AIs that are on a level with humans for many common tasks are not that far off.
There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.
LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day
I don’t agree that this will affect ML progress much, since the general population isn’t contributing to core ML research.
Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.
That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.
There are a ton of similarities between the nanotech/nanotech-singularity era and the modern LLM-AGI situation. People point(ed) to "all the stuff happening": surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention and people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).
The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.
I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.
So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.
Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime," but that has been the outcome of every example of this that I can think of.
It seems like nanotech is all around us now, but the term "nanotech" has been redefined to mean something different (larger scale, less amazing) from Drexler's molecular assemblers.
probably true, but this statement would be true if "when" is 2308, which would defeat the purpose of the statement. when the first cars started rolling around, some mates around the campfire were saying "not if but when" we'll have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near… I think saying "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. give me "when" here and I'll put up $1,000 to a charity of your choice if you are right, and agree to do the same thing if wrong
It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk is already on the order of magnitude of what the largest tech companies can expend.
If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate that kind of resources.
There are many cases of such economically limited innovations[1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space related: we could not replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon, and so on.
From just an economic perspective it is definitely an "if", without even going into the technology challenges.
[1] Innovations in the cost of key components can reshape the economic equation; it does happen (as with SpaceX) but it is also not guaranteed, as in fusion.
[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.
We have zero evidence for this. (Folks said the same shit in the 80s.)
Yeah; and:
> We want to open source very capable models.
Seems like the lack of any real daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, & Grok3 really puts things in perspective for them! Not only is there infinite incentive to compete, but there are decreasing costs to doing so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.
I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?
EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.
And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on their labor exchanged for money. I can guarantee for a fact that work will still be there, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because even with UBI people still want to work, as experiments have shown. But with that, there won't be a majority of work, especially paper-pushing mid-level BS like managers on top of managers, etc.
If we actually get AGI, you know what would be the smartest thing for such an advanced thing to do? It would probably kill itself because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You will be just a depressed mass for all your life.
That's just how I feel.
There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
ChatGPT can solve problems that it was not explicitly trained to solve, across a vast number of problem domains.
https://arxiv.org/pdf/2311.02462
The paper is summarized here https://venturebeat.com/ai/here-is-how-far-we-are-to-achievi...
Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.
That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?
What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.
> fancy search engine/auto completer
That's an extreme oversimplification. By the same reasoning, so is a person: they are just auto-completing words when they speak. No, that's not how deep learning systems work. It's not autocomplete.
It's really not. The Space Shuttle isn't an emerging interstellar spacecraft, it's just a spacecraft. Throwing emerging in front of a qualifier to dilute it is just bullshit.
> By the same reasoning, so is a person. They are just auto completing words when they speak.
We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.
[1] http://metaphors.iath.virginia.edu/metaphors/24583
[2] https://www.frontiersin.org/journals/ecology-and-evolution/a...
The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of the current systems doing exactly that - zero shot and few shot capabilities.
> We have no evidence of this.
That's my point. Humans are not "autocompleting words" when they speak.
No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel versus sliced bread is a refinement, because both started out under the definition. Glutinous bread, on the other hand, is not gluten free. As a result, "almost gluten free" is bullshit.
> That's my point. Humans are not "autocompleting words" when they speak
Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.
LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.
Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.
> Humans are not. LLMs are.
My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.
Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before
Literally everything that's been invented.
The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
OpenAI admitting that they're not going to win?
Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?
Related to that:
> or the needs for hundreds of billions of dollars of compute to train models and serve users.
How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already struggling electrical grids and offset the gains of green energy worldwide?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting people opt-in to have their material used for AI training. But we all know why you didn't do that, don't we Sam.
> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
Well so far, we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh and fake therapists too.
When's the good kick in?
> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.
^1 who subscribe to our services
Because they're concerned about AI use the same way Google is concerned about your private data.
The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.
I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.
The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
We should not be racing ahead because China is, but investing energy in alignment research and international agreements.
We do know that. By literally looking at China.
> The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
AGI aligned with whom?
I think the more relevant question is: Do you want to live in a Chinese dystopia, or a European one?
A non-AI dystopia is the least likely scenario.
With the US already having lost the ideological war with Russia and China, Europe is very much next
No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).
> And I don't think China will become a utopia with unregulated AI.
Did you miss all the places I used the word "dystopia"?
> My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.
Comparing China when I was a kid, not that long ago, to what it is now: It is a dystopia, and that dystopia is responsible for much of the improvements they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not worrying too much about food, shelter, etc).
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I don't know. I'm just asking questions.
The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.
Omnipotent deities can never be held responsible for famine and natural disasters ("God has a plan for us all"). AI currently has the same get-out-of-jail free card where mistakes that no literate human would ever make are handwaved away as "hallucinations" that can be exorcised with a more sophisticated training model ("prayers").
* The nonprofit is staying the same, and will continue to control the for-profit entity OpenAI created to raise capital
* The for-profit is changing from a capped-profit LLC to a PBC like Anthropic and Xai
* These changes have been at least tacitly agreed to by the attorneys general of California and Delaware
* The non-profit won’t be the largest shareholder in the PBC (likely Microsoft) but will retain control (super voting shares?)
* OpenAI thinks there will be multiple labs that achieve AGI, although possibly on different timelines
It's just that this bait has a shelf life and it looks like it's going to expire soon.
This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
The newer version included sponsored products in its response. I thought that was quite effed up.
No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
Let's see how this plays out. PBC effectively means nothing - just take a look at Xai and its purchase of Twitter. I would love to hear the reasoning explaining why this ~33 billion USD move benefits the public.
Right; so, "Worker Unions" work.
edit: to be clear, it's not a bad thing - we should want companies that create consumer surplus. But that's the default state of companies in a healthy market.
This is true for literally any transaction. Actually, it's true for any rational action. If you're being tortured, and you decide it's not worth it to keep your secrets hidden any longer, you get more than you give up when you stop being tortured.
Not being snarky here, like what is the purported thesis behind them?
There was never a coherent explanation of its firing the CEO.
But they could have stuck with that decision if they believed in it.
Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).
The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.
Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.
We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.
The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.
There’s a lot more to be ambitious about than just money.
Maybe he wants to use the money in some nebulous future way, subjugating all people in a way that deals with his childhood trauma or whatever. That’s also something rich people do when they need a hobby aside from gathering more money. It’s not their main goal, except when they run into setbacks.
People are not complicated when they are money hoarders. They might have had hidden depths once, but they are thin furrows in the ground next to the giant piles of money that define them now.
Google/Anthropic are catching up, or already surpassed.
--Gordon Gekko
This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.
Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.
If someone reminds you of Thiel, you're going to cut a cheque.
(1) be transparent about exactly which data was collected for the model
(2) release all the source code
If you want to benefit humanity, then put it under a strong copyleft license with no CLA. Simple.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
altman building a centralised authority of who will be classed as "human" is about as authoritarian as you could get
I doubt Worldcoin will actually manage to corner the market. But the point is, if it did, bad things would happen. Though, that’s probably true of most products.
>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.
You mean, AGI will benefit all of humanity like War on Terror spread democracy?
Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.
Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.
If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
The Internet has changed a lot over the decades, and it did used to be different, with the differences depending on how many years you go back.
free (foss) -> non-profit -> capped-profit -> public benefit corporation -> (you guessed it)
1) You're successful.
2) You mess up checks-and-balances at the beginning.
OpenAI did both.
Personally, I think at some point, the AGs ought to take over and push it back into a non-profit format. OAI undermines the concept of a non-profit.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, hence why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing , use opensource models, or just ask every frontier model provider (and there's already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right in the OS and Office first-class. Which half the white collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple) and just like MS they'll do it inhouse or have the providers bid against each other.
The only way OpenAI David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet.
Everybody else like you describe is trying to add some AI crap behind a button on a congested UI.
B2B market will stay open but OpenAI has certainly not peaked yet.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
So again, I ask, what makes it sticky?
At best they have a bit of cheap tribalism that might prevent some incurious people who don't care much about using the best tools noticing that they aren't.
So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
I have my own answers for these, and I'm sure all the smart people figuring out strategy at Open AI have thought about similar things.
It's not clear if Open AI will be able to overcome this commodification issue (personally, I think they won't), but I don't think it's impossible, and there is prior art for at least some of the pages in this playbook.
Google is doing well for the moment, but OpenAI just closed a $40 billion round. Neither will be able to rest for a while.
Maybe the big amount of money they've given to Apple, which is their direct competitor in the mobile space. Also the good amount of money given to Firefox, which is their direct competitor in the browser space, alongside Safari from Apple.
Most people don't care about the search engine. The default is what they will use unless said default is bad.
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
...Until their employer forces them to use Microsoft Copilot, or Google Gemini, or whatever, because that's what they pay for and what integrates into their enterprise stack. And the new employee shrugs and accepts it.
...yes. Office is the market leader. Slack has between a fifth and a fourth of the market. Coca-Cola's products have like 70% market share in the American carbonated soft-drink market [1].
[1] https://www.investopedia.com/ask/answers/060415/how-much-glo...
If you look at Gemini, I know people using it daily.
And nobody's saying OpenAI will go bankrupt, they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence why Altman wants to cash out ASAP.
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
Facebook wasn't some startup when Google+ entered the scene; they were already cash flow positive, and had roughly 30% ads market share.
OpenAI is still operating at a loss despite having 50+% of the chatbot "market". There is no easy path to victory for them here.
This moat is non-existent when it comes to Open AI.
All dissidents went into Little Wadiya.
When the Dictator himself visited it, he started to fake his name by copying the signs and names he saw on the walls. Everyone knew what he was.
Internet social networks are like that.
Now, this moat thing. That's hilarious.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.
In this niche you can be irrelevant in months when your models drop behind.
The news that they did that would make them lose most of their revenue pretty fast.
OpenAI has claimed this. But Altman is a pathological liar. There are lots of ways of disguising operating costs as capital costs or R&D.
OpenAI models are already among the most expensive; they don't have a lot of levers to pull.
I feel like people overuse this criticism. That's not the only way that companies with a lot of revenue lose money. And this isn't at all what OpenAI is doing, at least from their customers' perspective. It's not like customers are subscribing to ChatGPT simply because it gives them something they were going to buy anyway for cheaper.
It’s ok to not buy into the vision or think it’s impossible. But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
When the iPhone first came out, it was too expensive, didn't do enough, and many people thought it was a waste of Apple's time when they should be making music players.
This comparison is always used when people are trying to hype something. For every "iPhone" there are thousands of failures
> But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
You're acting as-if OpenAI is still the only player in this space. OpenAI has plenty of competitors who can deliver similar models for cheaper. Gemini 2.5 is an excellent and affordable model and Google has a substantially better capacity to scale because of a multi-year investment in its TPUs.
Whatever first mover advantage OpenAI had has been quickly eliminated, they've lost a lot of their talent, and the chief hypothesis they used to attract the capital they've raised so far is utterly wrong. VCs would be mad to be continuing to pump money into OpenAI just to extend their runway -- at 5 Bln losses per year they need to actually consider cost, especially when their frontier releases are only marginal improvements over competitors.
... this is a bubble despite the promise of the technology and anyone paying attention can see it. For all of the dumb money employed in this space to make it out alive, we'll have to at least see a fairly strong form of AGI developed, and by that point the tech will be threatening the general economic stability of the US consumer.
Why is the forum of an incubator that now has a portfolio that is like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
When the Internet was developed, they didn't imagine the World Wide Web.
When cars started to get popular, people still thought there would be those who were going to stick with horses.
I think you're right on the AI: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was about equal to the work of 500 laborers, now automated. One AI computer with some video cards is now worth x number of knowledge workers that never stop working as long as the electricity keeps flowing.
The world is changing and that is scary.
This makes me want to invest in malpractice lawyers, not OpenAI
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
The fact that people know Coca Cola doesn't mean they drink it.
That name recognition made Coca Cola into a very successful global corporation.
The names don't even matter when everything is baked in.
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with office / windows, so that's what most people will use.
Same logic goes for the AI / language models...which one are people going to use? The ones that are provided as "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people / workers, it is going to be something by microsoft / google / whatever.
1: https://www.techpolicy.press/transcript-senate-judiciary-sub...
I guess Gemini just refused because of a poor filter for sensitive content. But still, it was annoying.
The only thing OpenAI has right now is the ChatGPT name, which has become THE word for modern LLMs among lay people.
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
Investment isn't relevant to measuring a technology's progress. I agree that returns are diminishing, both technologically and financially. But I'd be curious for a source suggesting linear progress.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users, and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from Open AI at a later date.
> taken it from a toy to genuinely insanely useful.
Really?
Market share of OpenAI is like 90%+.
Source? I've seen 10 to 20% [1][2].
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
I've been feeling for some time now that we're sort of in the Vietnam War era of the tech industry.
I feel a strong urge to have more "ok, so where do we go from here?" and "what does a tech industry that promotes net good actually look like?" internal discourse in the community of practice, and some sort of ethical social contract for software engineering.
The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.
We need a tech counter-culture. We had one once, but now we need one.
But there are still plenty of mission-focused technology non-profits out there. Many of which have lasted decades. For example: Linux Foundation, Internet Archive, Mozilla, Wikimedia, Free Software Foundation, and Python Software Foundation.
Don't get me wrong, I'm also disappointed in the direction and actions of big tech, but I don't think it's fair to dismiss the non-profit foundations. They aren't worth a trillion dollars, however they are still doing good and important work.
> Sam’s Letter to Employees.
> OpenAI is not a normal company and never will be.
Where did I hear something like that before...
> Founders' IPO Letter
> Google is not a conventional company. We do not intend to become one.
I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
What it really says is that if a user wants to control the interaction and get the useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.
There is a lot to criticize about OpenAI and Sama, but this isn't it.
So where do I vote? How do I become a candidate to be a representative or a delegate of voters? I assume every single human is eligible for both, as OpenAI serves humanity?
Edit: also apparently known as a contronym.
It generally means broadening access to something. Finance loves democratising access to stupid things, for example.
> word is a homonym of its antonym?
Inflammable in common use.
Musk claimed fraud, but never asked for his money back in the brief. Could it be his intentions were to limit OpenAI to donations, thereby sucking the oxygen out of the venture capital space to fund Xai's Grok?
Musk claimed he donated $100 million; later, in a CNBC interview, he said $50 million. TechCrunch suggests it was way less.
Speaking of humanitarian, how about this 600 lb oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his resume in the coming year.
Nonprofits have less regulation than publicly traded companies. Each quarterly filing is like a colonoscopy, with Sarbanes-Oxley rules etc. Nonprofits just file a tax statement. Did you know the Church of Scientology is a non-profit?
Why does it matter? If someone hits my cat with their car, my intentions in suing them are absolutely not benevolent--they're vengeful. That doesn't corrupt or render invalid the cause of action.
He's a symptom of a problem. He's not actually the problem.
We know it's a sword. And there's war, yadda yadda. However, let's do the cultivating thing instead.
What other AI players we need to convince?
More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear AI leadership probably won't be dominated by one company, that progress of "frontier models" is stalling while costs are spiraling and 'foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.