Can some business person give us a summary on PBCs vs. alternative registrations?
(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)
[1] https://github.com/OpenCoreVentures/ocv-public-benefit-compa...
[2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-repo...
Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.
Key Structure Changes:
- Abandoning the "capped profit" model (which limited investor returns) in favor of traditional equity structure - Converting for-profit LLC to Public Benefit Corporation (PBC) - Nonprofit remains in control but also becomes a major shareholder
Reading Between the Lines:
1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.
2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.
3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.
4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).
Red Flags:
- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding
This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.
everyone will roll over if all large public companies roll over (and they will)
https://www.theregister.com/2025/02/03/us_senator_download_c...
One of them will eventually pass given that OpenAI is also pushing for protection:
Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work. Barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.
MS, Google and Apple and Meta have gigantic levers to pull and get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is (it's better) for anything besides image generation), people are going to start pressing them. And good luck to OpenAI fighting that.
Well Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek R1's distilled models are available for download, so anyone with a (~4 year old or newer) Mac or gaming PC can run them.
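For a rough sense of what "run them" means in practice, here is a minimal sketch using the Hugging Face transformers pipeline; the package, the checkpoint name, and the hardware assumptions are mine, not something stated in the comment above:

    # Minimal sketch: load one of the published R1 distill checkpoints locally.
    # Assumes `pip install transformers accelerate torch` and enough RAM/VRAM;
    # the model ID below is an assumption - pick a size your machine can hold.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
        device_map="auto",   # spread weights across GPU/CPU as available
    )
    out = pipe("Explain what a VPN does in one sentence.", max_new_tokens=128)
    print(out[0]["generated_text"])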
If the only thing keeping these companies valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.
For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.
But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.
Switching from ChatGPT to the many competitors is neither expensive nor painful.
and that makes complete sense if you don't have a lay person's understanding of the tech. Language models were never going to bring about "AGI."
This is another nail in the coffin
Which sounds pretty in-line with the SV culture of putting profit above all else.
If I were a person like several of the people working on AI right now (or really, just heading up tech companies), I could be the kind to look at a possible world-ending event happening in the next - eh, year, let's say - and just want to have a party at the end of the world.
Five years to ten years? Harder to predict.
The window there would at _least_ include the next 5 years, though obviously not ten.
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.
But AIs that are on a level with humans for many common tasks are not that far off.
There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.
LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day
I don’t agree that this will affect ML progress much, since the general population isn’t contributing to core ML research.
Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.
That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.
There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening" - surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention and people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).
The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.
I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.
So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.
Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime," but that has been the outcome of every example of this that I can think of.
It seems like nanotech is all around us now, but the term "nanotech" has been redefined to mean something different (larger scale, less amazing) from Drexler's molecular assemblers.
The problem is that the distance between a nano thin film, or an interesting but ultimately rigid nano-scale transistor, and a programmable nano-sized robot is enormous, despite the similar sizes. Likewise, the distance between an autocomplete heavily relying on preexisting external validators (compilers, linters, static code analyzers, etc.) and a real AI capable of thinking is equally enormous.
I thought this was a "we know we can't" thing rather than a "not with current technology" thing?
The idea of scaling up LLMs and hoping is .. pretty silly.
Probably true, but this statement would also be true if "when" is 2308, which would defeat its purpose. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'll have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near… I think "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, and you agree to do the same if wrong.
It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying the economic justification doesn't exist or won't come in the future, just that the upfront investment and risk is already on the order of magnitude of what the largest tech companies can expend.
If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate that kind of resources.
There are many cases of such economically limited innovations [1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space related: we cannot replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon, and so on.
From just an economic perspective it is definitely an "if", without even going into the technology challenges.
[1] Innovations in the cost of key components can reshape the economic equation; it does happen (as with SpaceX), but it is also not guaranteed, as in fusion.
[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.
We have zero evidence for this. (Folks said the same shit in the 80s.)
Yeah; and:
We want to open source very capable models.
Seems like the near-zero daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, & Grok 3 really put things in perspective for them! Not only is there infinite incentive to compete, but there's a decreasing cost to doing so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.
I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?
EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.
And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on their labor exchanged for money. I can guarantee that work will still be there, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because even with UBI people still want to work, as experiments have shown. But with that, the majority of work won't be there, especially paper-pushing mid-level bs like managers on top of managers, etc.
If we actually get AGI, you know what would be the smartest thing for such an advanced thing to do? It would probably kill itself because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You will be just a depressed mass for all your life.
That's just how I feel.
There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
ChatGPT can solve problems that it was not explicitly trained to solve, across a vast number of problem domains.
https://arxiv.org/pdf/2311.02462
The paper is summarized here https://venturebeat.com/ai/here-is-how-far-we-are-to-achievi...
Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.
That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?
What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.
> fancy search engine/auto completer
That's an extreme oversimplification. By the same reasoning, so is a person. They are just auto completing words when they speak. No that's not how deep learning systems work. It's not auto complete.
It's really not. The Space Shuttle isn't an emerging interstellar spacecraft, it's just a spacecraft. Throwing emerging in front of a qualifier to dilute it is just bullshit.
> By the same reasoning, so is a person. They are just auto completing words when they speak.
We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.
[1] http://metaphors.iath.virginia.edu/metaphors/24583
[2] https://www.frontiersin.org/journals/ecology-and-evolution/a...
The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of the current systems doing exactly that - zero shot and few shot capabilities.
> We have no evidence of this.
That's my point. Humans are not "autocompleting words" when they speak.
No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel versus sliced bread is a refinement--both started out under the definition. Glutinous bread, on the other hand, is not gluten free. As a result, "almost gluten free" is bullshit.
> That's my point. Humans are not "autocompleting words" when they speak
Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.
LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.
Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.
> Humans are not. LLMs are.
My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.
The "g" in AGI requires the AI be able to perform "the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans" [1]. Full and not full are binary.
> if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans
No, you can't, unless you're pre-supposing that LLMs work like human minds. Calling LLMs "emerging AGI" pre-supposes that LLMs are the path to AGI. We simply have no evidence for that, no matter how much OpenAI and Google would like to pretend it's true.
[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...
> No, you can't, unless you're pre-supposing that LLMs work like human minds.
You are missing the point. If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and the conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights. If you completely ignore all of that, then by the same reasoning (completely ignoring the complexity of the human brain) we can just say that people are auto-completing words when they speak.
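For readers unfamiliar with the term, here is a minimal sketch of the scaled dot-product attention being referred to (toy NumPy code, illustrative only, not any particular model's implementation):

    # Toy scaled dot-product attention: each token's output is a weighted mix of
    # every token's value vector, with weights from query/key similarity. This
    # mixing step is what goes beyond plain next-word frequency lookup.
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
    print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per token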
Sure, Google wants to redefine AGI so it looks like things that aren’t AGI can be branded as such. That definition is, correctly in my opinion, being called out as bullshit.
> obviously there will be stages in between
We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
> If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights
Fair enough, granted.
Edit: because if "AGI" doesn't mean that... then what means that and only that!?
"Agentic AI" means that.
Well, to some people, anyway. And even then, people are already arguing about what counts as agency.
That's the trouble with new tech, we have to invent words for new stuff that was previously fiction.
I wonder, did people argue if "horseless carriages" were really carriages? And "aeroplane" how many argued that "plane" didn't suit either the Latin or Greek etymology for various reasons?
We never did rename "atoms" after we split them…
And then there's plain drift: Traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. They're usually vegetarian and sometimes even vegan.
It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.
Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before
Literally everything that's been invented.
The Turing test was successfully passed. Pre-ChatGPT, I would not have believed that would happen so soon.
LLMs ain't AGI, sure. But they might be an essential part, and the missing parts may already be found, just not put together.
And there will always be plenty of work. Distributing resources might require new ways, though.
The very people whose theories about language are now being experimentally verified by LLMs, like Chomsky, have also been discrediting the Turing test as pseudoscientific nonsense since the early 1990s.
It's one of those things like the Kardashev scale, or Level 5 autonomous driving, that's extremely easy to define and sounds very cool and scientific, but actually turns out to have no practical impact on anything whatsoever.
Bots that are now almost indistinguishable from humans won't have a practical impact? I am sceptical. And not just because of scammers.
The two concepts have historically been inexorably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.
Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.
It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.
Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.
It isn't close at all.
"AGI" was already a goalpost move from "AI" which has been gobbled up by the marketing machine.
Here is a mainstream opinion about why AGI is already here. Written by one of the authors of the most widely read AI textbook: Artificial Intelligence: A Modern Approach https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
Can ChatGPT drive a car? No, we have specialized models for driving vs generating text vs image vs video etc etc. Maybe ChatGPT could pass a high school chemistry test but it certainly couldn't complete the lab exercises. What we've built is a really cool "Algorithm for indexing generalized data", so you can train that Driving model very similarly to how you train the Text model without needing to understand the underlying data that well.
The author asserts that because ChatGPT can generate text about so many topics that it's general, but it's really only doing 1 thing and that's not very general.
I think we need to separate the thinking part of intelligence from tool usage. Not everyone can use every tool at a high level of expertise.
Likewise for "intelligent", and even "artificial".
So no, ChatGPT can't drive a car*. But it knows more about car repairs, defensive driving, global road features (geoguesser), road signs in every language, and how to design safe roads, than I'm ever likely to.
* It can also run python scripts with machine vision stuff, but sadly that's still not sufficient to drive a car… well, to drive one safety, anyway.
How about we have ChatGPT start with a simple task, like reliably generating a JSON schema when asked to.
Hint: it will fail.
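For anyone who wants to run this experiment themselves, here is a minimal sketch of that kind of test; the packages, model name, and prompt are my assumptions, not something specified in the comment:

    # Ask the model for a JSON Schema, then check whether the reply even parses
    # and validates as a schema. Assumes `pip install openai jsonschema` and an
    # OPENAI_API_KEY in the environment; the model name below is illustrative.
    import json
    from openai import OpenAI
    from jsonschema import Draft202012Validator

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Return only a JSON Schema (draft 2020-12) for an object "
                       "with a required string 'name' and an optional integer "
                       "'age'. No prose, no markdown fences.",
        }],
    )
    text = resp.choices[0].message.content
    try:
        schema = json.loads(text)                   # does it parse at all?
        Draft202012Validator.check_schema(schema)   # is it a valid schema?
        print("valid schema")
    except Exception as err:
        print("failed:", err)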
This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of general intelligence is incorrect in my view.
Last time I checked, in an Anthropic paper, they asked the model to count something. They examined the logits and a graph showing how it arrived at the answer. Then they asked the model to explain its reasoning, and it gave a completely different explanation, because that was the most statistically probable response to the question. Does that seem like AGI to you?
The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
This has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling.
To be precise, it assumes a low variability in cycle time and improvement per cycle. If everyone is subjected to the same limits, the first-mover advantage remains insurmountable. I’d also argue that whether there is a ceiling matters less than how high it is. If the first AGI won’t hit a ceiling for decades, it will have decades of fratricidal supremacy.
It does have some weasel words around value-aligned and safety-conscious which they can always argue but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do in retrospect.
OpenAI admitting that they're not going to win?
Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?
Related to that:
> or the needs for hundreds of billions of dollars of compute to train models and serve users.
How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already struggling electrical grids and offset the gains of green energy worldwide?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting people opt-in to have their material used for AI training. But we all know why you didn't do that, don't we Sam.
> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
Well so far, we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh and fake therapists too.
When's the good kick in?
> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.
^1 who subscribe to our services
Because they're concerned about AI use the same way Google is concerned about your private data.
The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.
I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.
The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
We should not be racing ahead because China is, but investing energy in alignment research and international agreements.
We do know that. By literally looking at China.
> The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
AGI aligned with whom?
I think the more relevant question is: Do you want to live in a Chinese dystopia, or a European one?
A non-AI dystopia is the least likely scenario.
With the US already having lost the ideological war with Russia and China, Europe is very much next.
No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).
> And I don't think China will become a utopia with unregulated AI.
Did you miss all the places I used the word "dystopia"?
> My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.
Comparing China when I was a kid, not that long ago, to what it is now: It is a dystopia, and that dystopia is responsible for much of the improvements they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not worrying too much about food, shelter, etc).
No, just control. America exerts influence and control over Europe without having had to attack it in generations.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I don't know. I'm just asking questions.
Mostly OpenAI and DeepMind and it stunk of 'pulling up the drawbridge behind them' and pivoting from actual harm to theoretical harm.
For a crowd supposedly entrenched in startups, it's amazing everyone here is so slow to recognise it's all funding pitches and contract bidding.
The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.
Omnipotent deities can never be held responsible for famine and natural disasters ("God has a plan for us all"). AI currently has the same get-out-of-jail free card where mistakes that no literate human would ever make are handwaved away as "hallucinations" that can be exorcised with a more sophisticated training model ("prayers").
* The nonprofit is staying the same, and will continue to control the for-profit entity OpenAI created to raise capital
* The for-profit is changing from a capped-profit LLC to a PBC like Anthropic and Xai
* These changes have been at least tacitly agreed to by the attorneys general of California and Delaware
* The non-profit won’t be the largest shareholder in the PBC (likely Microsoft) but will retain control (super voting shares?)
* OpenAI thinks there will be multiple labs that achieve AGI, although possibly on different timelines
It's just that this bait has a shelf life and it looks like it's going to expire soon.
This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
The newer version included sponsored products in its response. I thought that was quite effed up.
No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
Let's see how this plays out. PBC effectively means nothing - just take a look at xAI and its purchase of Twitter. I would love to hear the reasoning explaining how this ~33 billion USD move benefits the public.
Right; so, "Worker Unions" work.
edit: to be clear, it's not a bad thing - we should want companies that create consumer surplus. But that's the default state of companies in a healthy market.
This is true for literally any transaction. Actually, it's true for any rational action. If you're being tortured, and you decide it's not worth it to keep your secrets hidden any longer, you get more than you give up when you stop being tortured.
Not being snarky here, like what is the purported thesis behind them?
Some founders truly believe in structuring the company for the benefit of the public, but Altman has already shown he's not one of them.
There was never a coherent explanation of its firing the CEO.
But they could have stuck with that decision if they believed in it.
Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).
The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.
But they found themselves alone in that it turns out the employees (who were employed by the for-profit company) and investors (MSFT in particular) didn't care about the mission and wanted to follow the money instead.
So the board had no choice but to capitulate and leave.
Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.
We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.
The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.
There’s a lot more to be ambitious about than just money.
Maybe he wants to use the money in some nebulous future way, subjugating all people in a way that deals with his childhood trauma or whatever. That’s also something rich people do when they need a hobby aside from gathering more money. It’s not their main goal, except when they run into setbacks.
People are not complicated when they are money hoarders. They might have had hidden depths once, but they are thin furrows in the ground next to the giant piles of money that define them now.
Google/Anthropic are catching up, or already surpassed.
--Gordon Gekko
St. Altman plans to create a corporate god for us dumb schmucks, and he will be its prophet.
This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.
Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.
(1) be transparent about exactly which data was collected for the model
(2) release all the source code
If you want to benefit humanity, then put it under a strong copyleft license with no CLA. Simple.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
Altman building a centralised authority over who will be classed as "human" is about as authoritarian as you could get.
I doubt Worldcoin will actually manage to corner the market. But the point is, if it did, bad things would happen. Though, that’s probably true of most products.
>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.
You mean, AGI will benefit all of humanity like War on Terror spread democracy?
Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.
Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.
If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.
Condoning "honest liars" enables a whole other level of open and unrestricted criminality.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
The Internet has changed a lot over the decades, and it did used to be different, with the differences depending on how many years you go back.
It absolutely did. Steve Wozniak was real. Silicon Valley wasn't always a hive of liars and sycophants.
It was sparked by going to a video conference "Hyperlocal Heroes: Building Community Knowledge in the Digital Age" hosted by New_ Public: https://newpublic.org/ "Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."
A not-insignificant amount of time in that one-hour teleconference was spent related to funding models for local social media and local reporting.
Afterwards, I got to thinking. The USA spent literally trillions of dollars on the (so-many-problematical-things-about-it-I-better-stop-now) Iraq war. https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War "According to a Congressional Budget Office (CBO) report published in October 2007, the US wars in Iraq and Afghanistan could cost taxpayers a total of $2.4 trillion by 2017 including interest."
Or, from a different direction, the USA spends about US$200 billion per year on mostly-billboard-free roads: https://www.urban.org/policy-centers/cross-center-initiative... "In 2021, state and local governments provided three-quarters of highway and road funding ($154 billion) and federal transfers accounted for $52 billion (25 percent)."
That's about US$700 per person per year on US roads.
So, clearly huge amounts of money are available in the USA if enough people think something is important. Imagine if a similar amount of money went to funding exactly what you outlined -- a free web presence for distributed social media -- with an infrastructure funded by tax dollars instead of advertisements. Isn't a healthy social media system essential to 21st century online democracy with public town squares?
And frankly such a distributed social media ecosystem in the USA might be possible for at most a tenth of what roads cost, like perhaps US$70 per person per year (or US$20 billion per year)?
Yes, there are all sorts of privacy and free speech issues to work through -- but it is not like we don't have those all now with the advertiser-funded social media systems we have. So, it is not clear to me that such a system would be immensely worse than what we have.
But what do I know? :-) Here was a previous big government suggestion be me from 2010 -- also mostly ignored (until now 15 years later the USA is in political crisis over supply chain dependency and still isn't doing anything very related to it yet): "Build 21000 flexible fabrication facilities across the USA" https://web.archive.org/web/20100708160738/http://pcast.idea... "Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
free (foss) -> non-profit -> capped-profit -> public benefits corporation -> (you guessed it)
1) You're successful.
2) You mess up checks-and-balances at the beginning.
OpenAI did both.
Personally, I think at some point, the AGs ought to take over and push it back into a non-profit format. OAI undermines the concept of a non-profit.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, hence why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing , use opensource models, or just ask every frontier model provider (and there's already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right in the OS and Office first-class. Which half the white collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple) and just like MS they'll do it inhouse or have the providers bid against each other.
The only way OpenAI David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet.
Everybody else like you describe is trying to add some AI crap behind a button on a congested UI.
B2B market will stay open but OpenAI has certainly not peaked yet.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
So again, I ask, what makes it sticky?
At best they have a bit of cheap tribalism that might prevent some incurious people who don't care much about using the best tools noticing that they aren't.
So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
I have my own answers for these, and I'm sure all the smart people figuring out strategy at Open AI have thought about similar things.
It's not clear if Open AI will be able to overcome this commodification issue (personally, I think they won't), but I don't think it's impossible, and there is prior art for at least some of the pages in this playbook.
Google is doing well for the moment, but OpenAI just closed a $40 billion round. Neither will be able to rest for a while.
Maybe the large amount of money they've given to Apple, which is their direct competitor in the mobile space. Also a good amount of money given to Firefox, which is their direct competitor in the browser space, alongside Safari from Apple.
Most people don't care about the search engine. The default is what they will use unless said default is bad.
So then apply that to Open AI. What are the distribution channels? Should they be paying Cursor to make them the default model? Or who else? Would that work? If not, why not? What's different?
My intuition is that this wouldn't work for them. I think if this "pay to be default" strategy works for someone, it will be one of their deeper pocketed rivals.
But I also don't think this was the only reason Google won search. In my memory, those deals to pay to be the default came fairly long after they had successfully built the brand image as the best search engine. That's how they had the cash to afford to pay for this.
A couple years ago, I thought it seemed likely that Open AI would win the market in that way, by being known as the clear best model. But that seems pretty unclear now! There are a few different models that are pretty similarly capable at this point.
Essentially, I think the reason Google was able to win search whereas the prospects look less obvious for Open AI is that they just have stronger competition!
To me, it just highlights the extent to which the big players at the time of Google's rise - Microsoft, Yahoo, ... Oracle maybe? - really dropped the ball on putting up strong competition. (Or conversely, Google was just further ahead of its time.)
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
...Until their employer forces them to use Microsoft Copilot, or Google Gemini, or whatever, because that's what they pay for and what integrates into their enterprise stack. And the new employee shrugs and accepts it.
...yes. Office is the market leader. Slack has between a fifth and a fourth of the market. Coca-Cola's products have like 70% market share in the American carbonated soft-drink market [1].
[1] https://www.investopedia.com/ask/answers/060415/how-much-glo...
If you look at Gemini, I know people using it daily.
And nobody's saying OpenAI will go bankrupt, they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence why Altman wants to cash out ASAP.
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
Facebook wasn't some startup when Google+ entered the scene; they were already cash flow positive, and had roughly 30% ads market share.
OpenAI is still operating at a loss despite having 50+% of the chatbot "market". There is no easy path to victory for them here.
This moat is non-existent when it comes to Open AI.
All dissidents went into Little Wadyia.
When the Dictator himself visited it, he started to fake his name by copying the signs and names he saw on the walls. Everyone knew what he was.
Internet social networks are like that.
Now, this moat thing. That's hilarious.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.
In this niche you can be irrelevant in months when your models drop behind.
The news that they did that would make them lose most of their revenue pretty fast.
OpenAI has claimed this. But Altman is a pathological liar. There are lots of ways of disguising operating costs as capital costs or R&D.
OpenAI models are already among the most expensive; they don’t have a lot of levers to pull.
I feel like people overuse this criticism. That's not the only way that companies with a lot of revenue lose money. And this isn't at all what OpenAI is doing, at least from their customers' perspective. It's not like customers are subscribing to ChatGPT simply because it gives them something they were going to buy anyway for cheaper.
It’s ok to not buy into the vision or think it’s impossible. But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
When the iPhone first came out, it was too expensive, didn’t do enough, and many people thought it was a waste of Apple’s time when they should have been making music players.
This comparison is always used when people are trying to hype something. For every "iPhone" there are thousands of failures
> But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
You're acting as-if OpenAI is still the only player in this space. OpenAI has plenty of competitors who can deliver similar models for cheaper. Gemini 2.5 is an excellent and affordable model and Google has a substantially better capacity to scale because of a multi-year investment in its TPUs.
Whatever first mover advantage OpenAI had has been quickly eliminated, they've lost a lot of their talent, and the chief hypothesis they used to attract the capital they've raised so far is utterly wrong. VCs would be mad to be continuing to pump money into OpenAI just to extend their runway -- at 5 Bln losses per year they need to actually consider cost, especially when their frontier releases are only marginal improvements over competitors.
... this is a bubble despite the promise of the technology and anyone paying attention can see it. For all of the dumb money employed in this space to make it out alive, we'll have to at least see a fairly strong form of AGI developed, and by that point the tech will be threatening the general economic stability of the US consumer.
Why is the forum of an incubator whose portfolio is now something like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
When the Internet was developed, they didn't imagine the World Wide Web.
When cars started to get popular, people still thought there would be those who would stick with horses.
I think you're right on the AI: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was roughly equal to 500 laborers now automated. One AI computer with some video cards is now worth x number of knowledge workers - ones that never stop working as long as the electricity keeps flowing.
The world is changing and that is scary.
This makes me want to invest in malpractice lawyers, not OpenAI
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
Oh we know: https://pmc.ncbi.nlm.nih.gov/articles/PMC11006786/
The article could just as easily be about “Delayed diagnosis of a transient ischemic attack caused by talking to some rando on Reddit” and it would be just as (non) newsworthy.
AI isn't going to be the world changing, AGI, that was sold to the public. Instead, it will simply be another B2B SaaS product. Useful, for sure. Even profitable for startups.
But "take over the world" good? Unlikely.
The fact that people know Coca Cola doesn't mean they drink it.
That name recognition made Coca Cola into a very successful global corporation.
The names don't even matter when everything is baked in.
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with office / windows, so that's what most people will use.
Same logic goes for the AI / language models...which one are people going to use? The ones that are provided as "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people / workers, it is going to be something by microsoft / google / whatever.
1: https://www.techpolicy.press/transcript-senate-judiciary-sub...
The only thing OpenAI has right now is the ChatGPT name, which has become THE word for modern LLMs among lay people.
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
We do appear to be hitting a cap on the current generation of auto-regressive LLMs, but this isn't a surprise to anyone on the frontier. The leaked conversations between Ilya, Sam and Elon from the early OpenAI days acknowledge they didn't have a clue as to architecture, only that scale was the key to making experiments even possible. No one expected this generation of LLMs to make it nearly this far. There's a general feeling of "quiet before the storm" in the industry, in anticipation of an architecture/training breakthrough, with a focus on more agentic, RL-centric training methods. But it's going to take a while for anyone to prove out an architecture sufficiently, train it at scale to be competitive with SOTA LLMs, and perform enough post-training, validation and red-teaming to be comfortable releasing it to the public.
Current LLMs are years and hundreds of millions of dollars of training in. That's a very high bar for a new architecture, even if it significantly improves on LLMs.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to win users back from OpenAI at a later date.
> taken it from a toy to genuinely insanely useful.
Really?
Why? I paid for Claude for a while, but with Deepseek, Gemini and the free hits on Mistral, ChatGPT, Claude and Perplexity I'm not sure why I would now. This is anecdotal of course, but I'm very rarely unique in my behaviour. I think the best the subscription companies can hope for is that their subscribers don't realize that Deepseek and Gemini can basically do all you need for free.
I cannot stress this enough: if you know what Deepseek, Claude, Mistral, and Perplexity are, you are not a typical consumer.
Arguably, if you have used a single one of those brands you are not a typical consumer.
The vast majority of people have used ChatGPT and nothing else, except maybe clicking on Gemini or Meta AI by accident.
They might not “know” the brand as well as ChatGPT, but the average consumer has definitely been exposed to those at the very least.
DeepSeek also made a lot of noise, to the point that, anecdotally, I’ve seen a lot of people outside of tech using it.
Market share of OpenAI is like 90%+.
Source? I've seen 10 to 20% [1][2].
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
I probably need to clarify what I'm talking about, so that peeps like @JumpCrisscross can get a better grasp of it.
I do not mean the total market share of the category of businesses that could be labeled as "AI companies", like Microsoft or NVIDIA, on your first link.
I will not talk about your second link because it does not seem to make sense within the context of this conversation (zero mentions or references to market share).
What I mean is:
* The main product that OpenAI sells is AI models (GPT-4o, etc...)
* OpenAI does not make hardware. OpenAI is not in the business of cloud infrastructure. OpenAI is not in the business of selling smartphones. A comparison between OpenAI and any of those companies would only make sense for someone with a very casual understanding of this topic. I can think of someone, perhaps, who only used ChatGPT a couple times and inferred it was made by Apple because it was there on its phone. This discussion calls for a deeper understanding of what OpenAI is.
* Other examples of companies that sell their own AI models, and thus compete directly with OpenAI in the same market (just look at their products and services), are Anthropic (w/ Claude), Google (w/ Gemini), and some others like Meta and Mistral with open models.
* All those companies/models, together, make up a market that you can give any name you want (The AI Model Market TM)
That is the market I'm talking about, and that is the one that I estimated to be 90%+ which was pretty much on point, as usual :).
1: https://gs.statcounter.com/ai-chatbot-market-share
2: https://www.ctol.digital/news/latest-llm-market-share-mar-20...
Your second source doesn’t say what it’s measuring and disclaims itself as from its “‘experimental era’ — a beautiful mess of enthusiasm, caffeine, and user-submitted chaos.” Your first link only measures chatbots.
ChatGPT is a chatbot. OpenAI sells AI models, including via ChatGPT. Among chatbots, sure, 84% per your source. (Not “90%+,” as you stated.) But OpenAI makes more than chatbots, and in the broader AI model market, its lead is far from 80+ percent.
TL;DR: It is entirely wrong to say the “market share of OpenAI is like 90%+.”
[1] https://firstpagesage.com/reports/top-generative-ai-chatbots...
>10%-20%
Lmao, not even in Puchal's wildest dreams.
One, you suggested OP had not “looked at the actual numbers.” That implies you have. If you were just guessing, that’s misleading.
Two, you misquoted (and perhaps misunderstand) a statistic that doesn’t match your claim. Even in your last comment, you defined the market as “companies that sell their own AI models” before doubling down on the chatbot-only figure.
> not even in Puchal wildest dreams
Okay, so what’s your source? Because so far you’ve put forward two sources, a retracted one and one that measures a single product that you went ahead and misquoted.
OpenAI trained GPT-4.1 and 4.5—both originally intended to be GPT-5 but they were considered disappointments, which is why they were named differently. Did they really believe that scaling the number of parameters would continue indefinitely without diminishing returns? Not only is there no moat, but there's also no reasonable path forward with this architecture for an actual breakthrough.
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
I've been feeling for some time now that we're sort of in the Vietnam War era of the tech industry.
I feel a strong urge to have more "ok, so where do we go from here?" and "what does a tech industry that promotes net good actually look like?" internal discourse in the community of practice, and some sort of ethical social contract for software engineering.
The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.
We need a tech counter-culture. We had one once, but now we need one.
But there are still plenty of mission-focused technology non-profits out there. Many of which have lasted decades. For example: Linux Foundation, Internet Archive, Mozilla, Wikimedia, Free Software Foundation, and Python Software Foundation.
Don't get me wrong, I'm also disappointed in the direction and actions of big tech, but I don't think it's fair to dismiss the non-profit foundations. They aren't worth a trillion dollars, however they are still doing good and important work.
> Sam’s Letter to Employees.
> OpenAI is not a normal company and never will be.
Where did I hear something like that before...
> Founders' IPO Letter
> Google is not a conventional company. We do not intend to become one.
I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
What it really says is that if a user wants to control the interaction and get useful responses, direct programmatic calls to the API that set the system prompt are going to be needed (a minimal sketch follows below). And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.
There is a lot to criticize about OpenAI and Sama, but this isn't it.
So where do I vote? How do I become a candidate to be a representative or a delegate of voters? I assume every single human is eligible for both, since OpenAI serves humanity?
Edit: also apparently known as a contronym.
It generally means broadening access to something. Finance loves democratising access to stupid things, for example.
> word is a homonym of its antonym?
Inflammable in common use.
Musk claimed fraud, but never asked for his money back in the brief. Could it be his intention was to limit OpenAI to donations, thereby sucking the oxygen out of the venture capital space to fund xAI's Grok?
Musk claimed he donated $100 million; later, in a CNBC interview, he said $50 million. TechCrunch suggests it was way less.
Speaking of humanitarian, how about this 600 lb oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his resume in the coming year.
Non-profits have less regulation than publicly traded companies. Each quarterly filing is like a colonoscopy under Sarbanes-Oxley rules, etc. Non-profits just file a tax statement. Did you know the Church of Scientology is a non-profit?
He's a symptom of a problem. He's not actually the problem.
We know it's a sword. And there's war, yadda yadda. However, let's do the cultivating thing instead.
What other AI players do we need to convince?
More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.
Looking forward to re-living that shift from life-changing community resource to scammy and user-hostile product.
Is OpenAI making a profit?