Would anyone like to found a startup doing high-security embedded systems infrastructure? Peter at my username dot com if you’d like to connect.
I’d argue the other way around: 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
Beware the sigmoidal curve, though. Growth is exponential till it’s not.
In what way does it suggest that? What level of growth is evidence that a product is universally useful?
That seems like pretty strong evidence that it is generally, if not universally, useful to everyone given the opportunity.
"does the numbers add up?"
this article is about NUMBERS regarding return-on-investment / etc
"useful" is so vague so it's too 'useless' to the discussion here... I'm not sure why everyone here is parrotting that like gpt hallucination
The number may not be all that accurate to begin with - and I imagine it's also paired with what another commenter has said: OpenAI is basically giving their product to companies, and the companies are making the employees log in and use it in some way. That's not natural growth in any sense of the word.
Anyway, I bet it will be really useful for cool stuff if it can ever run on my laptop!
of course, it's better than "this is so crap no one would buy it" -- but for investors, they want to know: "if I put X dollars now, would I get 10*X dollars or 1/10 X dollars?"
it's weird that all these comments on "usefulness" don't even attempt to explain whether the numbers add up ok or not
Not sure how much one should expect or deserve switching from a free search engine to a free chatbot.
If you care about search, use Kagi [1].
[1] https://kagi.com
The blockchain/bitcoin bros tried the same marketing spin. "Bitcoin will end poverty once we get it into everyone's hands." When that started slipping, it became "NFTs will save us all."
Yeah. Sure. Been there. Done that. Just needs "more investment"... and then more... then more... all because of self reported "growth".
The latest LLMs are extraordinarily useful life agents as is. Most people would benefit from using them.
It'd be like pretending it's either water or education (pick one). The answer is both, and you don't have to pick one or the other in reality at all. The entities trying to solve each aspect are typically different organizations anyway.
hmm maybe that "would benefit" is a bit too vague?
Someone who doesn't have access to clean water and stable food will not benefit from this, nor will the powers that be who "make it available" actually improve their lives. It's already apparent that the tech nerds of the late 90s and early 2000s were NOT the good guys. Being good at computers does not make you a good person. The business model for AI makes zero sense when you have real-world experience. Without massive, complete social and economic changes, it won't work out. And for those championing that change, what makes you think you'll be the special comrade brought up the ranks to benefit as the truest of believers?
Sorry, but this shit is really starting to rub me the wrong way, especially with the massive bubble investment that's growing around all of it. This won't be good. The housing collapse sucked. The same pattern is emerging, and I'm getting a bad, bad feeling this will make the housing collapse look like nothing due to the long-term ramifications.
This, maybe not. But arguing that someone without access to clean water or a stable food supply can't benefit from any consumer tech ignores that many of those same people will choose to spend their money on a mobile phone over stable food.
"Ah, you're absolutely right! Have you tried looking in the shop?"
I'm not sure I understand the reasoning. Lots of people use a thing, so everyone should?
For OpenAI, I think the problem is this: if browsers, operating systems, phones, word processors [some other system people already use and/or pay for] eventually integrate some form of generative AI that is good enough - and an integrated AI can be a lot less capable than the cutting edge and still win - what will be the market for a standalone AI for the general public?
There will always be a market for professional products, cutting edge research and coding tools, but I don’t think that makes a trillion dollar company.
This doesn’t make any sense. Popular is not the same as useful. You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.
Instead, some studies have shown that LLMs are making professionals less productive:
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
If you are using a service weekly for a long period, you find it useful.
>You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.
Why would you need to do that? Why is a vague (in this instance) notion of 'productivity' the only measure of usefulness? ChatGPT (not the API, just the app) processes over 2.6B messages every single day, and most of these (1.9B) are for non-work purposes [0]. So what 'productivity' would you even be measuring here? Do you think everything that doesn't have to do with work is useless? I hope not, because you'd be wrong.
If something makes you laugh consistently, it's useful. If it makes you happy, it's useful. 'Productivity' is not even close to being the be-all and end-all of usefulness.
[0] https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
Do alcoholics find their daily usage of alcohol really useful? You can of course make a case for this, but it's quite a stretch. I think people use stuff weekly for all sorts of reasons besides usefulness for the most common interpretation of the word.
Of course they do. They use it to get drunk and to avoid withdrawals. You're trying to confuse useful with productive. Being productive does make a difference, though, because if something isn't productive it doesn't generate enough cash to buy more of it - you have to pull the cash from somewhere else.
So I think your feeling is correct, although your argument is wrong. Buying gas for your car is productive, because it gets you to work, which gets you money to pay for more gas and other things (like alcohol). Buying alcohol is not productive, and that means that people can't pay too much for it.
The difference is that besides being useful, alcohol is actively harmful.
Any idiot can sell a dollar’s worth of value for 90 cents.
So maybe giving away more and more free stuff is good for growth? The product is excellent - ChatGPT is still my favorite, but the competition isn't that far behind. In fact, I started liking Grok better for most tasks.
Only about 5% see enough value to drop $20 a month... It's like VR and AR: if people get a headset for free they'll use it every now and then, but virtually nobody wants to drop money on one.
LLMs have already been commodified
Selling 100B worth of stocks for anything close to 100B is not possible. That volume would mini-crash the entire exchange.
Nasdaq trades about half a trillion dollars a day [1]. Even if Musk were an idiot and dumped $100bn in one day, it would crash Tesla's stock price, not the Nasdaq.
If Musk wanted to give OpenAI $100bn, the best way to do it would be to (a) borrow against his shares or (b) give OpenAI his (non-voting) shares.
[1] https://www.nasdaqtrader.com/Trader.aspx?id=DailyMarketSumma...
Have a search for "chatfishing".
Always a good dating strategy.
That aside, his math is wrong
edit: Aussies and kiwis too!
Why can't OpenAI keep projecting/promising massive data centre growth year after year, fail to deliver, and keep making Number Go Up?
Because Nvidia will eventually run out of money. The incestuous loop - Nvidia funds AI entities, which use those funds to buy Nvidia chips, artificially propping up Nvidia's stock price - will eventually end, and poof.
The competing forces are the market's insatiable need for growth every quarter, and other countries also chasing AI, which will not slow down even if countries like the US do.
I've been thinking about American exceptionalism - the way it is head and shoulders above Europe and the rest of the developed world in terms of GDP growth, market returns, startup successes, etc. - and what might be the root of this success. And I'm starting to think that, apart from various mild genuine effects, it is also a sequence of circular self-fulfilling prophecies.
Let's say you're a sophisticated startup and you want some funding. Where do you go? US of course - it has the easiest access to capital. It does so presumably because US venture funds have an easier time raising funds. And that's presumably because of their track record of making money for investors - real, or at least perceived. They invest in these startups and they exit at a profit, because US companies have better valuations than elsewhere, so at IPO investors lap up the shares and the VCs make money. It's easy to find buyers for US stocks because they're always going up. In turn, they're going up because, well, there's lots of investors. It's much easier to raise billions for data centres and fairy dust because investors are in awe of what can be done with the money and anyway line always go up. Stocks like TSLA have valuations you couldn't justify elsewhere. Maybe because they will build robot AI rocket taxis, or maybe because the collective American Allure means valuations are just high.
The beauty of this arrangement is that the elements are entangled in a complex web of financial interdependency. If you think about these things in isolation, you wouldn't conclude there's anything unusual. US VC funding is so good because there's a lot of capital - lucky them. This thought of circularity only struck me when trying to think of the root cause - the nuclear set of elements that drive it. And I concluded any reason I can think of is eventually recursive.
I'm not saying America is just dumb luck kept together by spittle, of course there are structural advantages the US has. I'm just not sure it really is that much better an economic machine than other similar countries.
One difference to a Ponzi scheme is that you might actually hit a stable level and stay there rather than crash and burn. So it's more like a collective investment into a lottery. OpenAI might burn $400bn and achieve singularity, then proceed to own the rest of the world.
But I can't shake the feeling that a lot of recent US growth is a bit of smoke and mirrors. After adjusting for tech, US indices didn't outperform European ones post GFC, IIRC. Much of its growth this year is AI, financed presumably by half the world and maintained by sky-high valuations. And no one says "check" because, well, it's the US and the line always go up.
The Oracle deal structure: OpenAI pays ~$30B/year in rental fees starting fiscal 2027/2028 [2], ramping up over 5 years as capacity comes online. Not "$400B in 12 months."
The deals are structured as staged vendor financing:

- NVIDIA "invests" $10B per gigawatt milestone, and gets paid back through chip purchases [3]
- AMD gives OpenAI warrants for 160M shares (~10% equity) that vest as chips deploy [4]
- As one analyst noted: "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia" [3]
This is circular vendor financing where suppliers extend credit betting on OpenAI's growth. It's unusual and potentially fragile, but it's not "OpenAI needs $400B cash they don't have."
Zitron asks: "Does OpenAI have $400B in cash?"
The actual question: "Can OpenAI grow revenue from $13B to $60B+ to cover lease payments by 2028-2029?"
The first question is nonsensical given deal structure. The second is the actual bet everyone's making.
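For scale, a quick back-of-envelope sketch of what that second question implies (the three-year compounding window is my assumption, not something from the deal documents):

```python
# What does growing revenue from ~$13B to $60B+ by 2028-2029 require?
# Assuming roughly three years of compounding from today's base.
start, target, years = 13e9, 60e9, 3
cagr = (target / start) ** (1 / years) - 1
print(f"Required revenue CAGR: {cagr:.0%}")  # roughly 66% per year
```

Sustaining ~66%/year growth for three straight years is the actual bet.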
His core thesis - "OpenAI literally cannot afford these deals therefore fraud" - fails because he fundamentally misunderstands how the deals work. The real questions are about execution timelines and revenue growth projections, not about OpenAI needing hundreds of billions in cash right now.
There's probably a good critical piece to write about whether these vendor financing bets will pay off, but this isn't it.
[1] https://www.cnbc.com/2025/09/23/openai-first-data-center-in-...
[2] https://w.media/openai-to-rent-4-5-gw-of-data-center-power-f...
[3] https://www.cnbc.com/2025/09/22/nvidia-openai-data-center.ht...
[4] https://techcrunch.com/2025/10/06/amd-to-supply-6gw-of-compu...
It is bagholders all the way down[1]! The final bagholder will be the taxpayer/pension holder.
These companies are doing all sorts of round-tripping on top of propping up the economy on a foundation of fake revenue, on purpose, so that when it all comes crumbling down they can go cry to the feds: "help! we are far too big to fail, the fate of the nation depends on us getting bailed out at taxpayer expense."
I wrote a post about his insistence that the "cost of inference" is going up. https://crespo.business/posts/cost-of-inference/
To be clear, I do expect that the bubble will burst at some point (my bet is 2028/2029) — but that's due to dynamics between markets and new tech. The tech itself is solid, even in the current form — but when there's a lot of money to make you tend to observe repeatable social patterns that often lead to overvaluing of the stuff in question.
Assuming their growth rate is getting close to stabilizing and will be at ~100% for 3 years to end of 2028 - that'd be $104B in revenue, on 6.4B WAUs.
I wouldn't bank on either of those numbers - but Oracle and Nvidia kind of need to bank on it to keep their stocks pumped.
Their growth decay is around 20% every 2 months - meaning by this time next year they could be closer to 1.2B WAUs than to 1.6B WAUs, and the following year they could be closer to 1.4B WAUs than to 3.2B WAUs.
Impressive, for sure, but still well below Google and Facebook, revenue much lower and growth probably even.
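A minimal sketch of how that growth decay compounds (my reconstruction of the parent's math; the starting WAU count and per-period growth rate are assumptions):

```python
# ~800M WAUs today, growing ~12% per 2-month period (~100%/year),
# with the growth rate itself decaying ~20% each period.
waus, growth = 800e6, 0.122
for period in range(1, 13):        # 12 two-month periods = 2 years
    waus *= 1 + growth
    growth *= 0.80                 # 20% growth decay per period
    if period in (6, 12):
        print(f"after {period * 2} months: {waus / 1e9:.2f}B WAUs")
# -> ~1.2B after one year and ~1.4B after two, vs. 1.6B and 3.2B
#    if growth held steady at 100%/year.
```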
And of course I might pay $20/month for ChatGPT and another $20/month for sora (or some hypothetical future b2c app)
Codex is my current favorite code reviewer (compared to bug bot and others), although others have had pretty different experiences. Codex is also my current favorite programming model (although it's quite reasonable to prefer Claude Code with Sonnet 4.5). I would happily encourage my employer to spend even more on OpenAI tools, and this is ignoring our API spend (also currently increasing).
"OpenAI cannot actually afford to pay $60 billion / year" the article states with confidence. But that's the level of revenue they'd be pulling in from their existing free users if monetized as effectively as Facebook or Google. No user growth needed.
And it seems this isn't far off, given the Walmart deal. Of course they'll start off with unobtrusive ad formats used only in situations where the user has definite purchase intent, to make the feature acceptable to the users, and then tighten the screws over time.
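A rough sanity check on that claim (the user count and ARPU figure are my ballpark assumptions, not reported numbers):

```python
# Implied advertising economics for ~800M weekly users.
users = 800e6            # assumed weekly active users
meta_arpu = 45           # Meta's worldwide ad ARPU, very roughly, USD/year
print(f"At Meta-like worldwide ARPU: ${users * meta_arpu / 1e9:.0f}B/year")  # ~$36B
print(f"ARPU needed for $60B: ${60e9 / users:.0f}/user/year")                # ~$75
```

So worldwide-average Meta monetization gets a good chunk of the way there; the full $60B implies per-user revenue closer to Meta's developed-market figures.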
AGI is absolutely a national security concern. Despite it being an enormous number, it'll happen. It may not be earmarked for OpenAI, but the US is going to ensure that the energy capability is there.
This may well be the PR pivot that's to come once it becomes clear that taxpayer funding is needed to plug any financing shortfalls for the industry - it's "too big to let fail". It won't all go to OpenAI, but be distributed across a consortium of other politically connected corps: Oracle, Nvidia/Intel, Microsoft, Meta and whoever else.
These 6 companies are using only a small portion of their own cash reserves to invest, and using private credit for the rest. Meta is getting a $30 billion loan from PIMCO and Blue Owl for a datacenter [0], which they could easily pay for out of their own pocket. There are also many datacenters being funded through asset-backed securities or commercial mortgage-backed securities [1], a market that can quickly collapse if expected income doesn't materialize, leading to mortgage defaults, as in 2008.
[0] https://www.reuters.com/legal/transactional/meta-set-clinch-...
[1] https://www.etftrends.com/etf-strategist-channel/securitizin...
> Despite it being an enormous number, it'll happen.
Care to share your crystal ball?
We're doing a pretty shit job of ensuring that today. Capacity is already intensely strained, and the govt seems to be decelerating investment into power capacity growth, if anything
They Don't Have the Money: OpenAI Edition
In what universe are OpenAI stock, gold and physical cash the only assets?
To put it another way, I don't know anything but I could probably make a '1 GW' datacenter with a single 6502 and a giant bank of resistors.
Also the workloads completely change over time as racks get retired and replaced, so it doesn't mean much.
But you can basically assume that with GB200s right now, 1 GW is ~5 exaflops of compute, depending on precision type and my maths being correct!
Look at next-gen Rubin with its CPX co-processor chip to see things getting much weirder and more specialized. It's there for prefilling long contexts, which is compute-intensive:
> Something has to give, and that something in the Nvidia product line is now called the "Rubin" CPX GPU accelerator, which is aimed specifically at parts of the inference workload that do not require high bandwidth memory but do need lots of compute and, increasingly, the ability to process video formats for both input and output as part of the AI workflow.
https://www.nextplatform.com/2025/09/11/nvidia-disaggregates...
To confirm what you are saying, there is no coherent unifying way to measure what's getting built other than by power consumption. Some of that budget will go to memory, some to compute (some to interconnect, some to storage), and it's too early to say what ratio each may have, to even know what ratios of compute:memory we're heading towards (and one size won't fit all problems).
Perhaps we end up abandoning HBM and DRAM! Maybe the future belongs to high-bandwidth flash! Maybe with its own computational storage! Trying to use figures like flops or bandwidth is applying today's answers to a future that might get weirder on us. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...
You have a lot more things in a DC than just GPUs consuming power and producing heat. GPUs are the big ones, sure, but after a while, switches, firewalls, storage units, other servers and so on all contribute to the power footprint significantly. A big small-packet, high-throughput firewall packs a surprisingly high amount of compute capacity, eats a surprising amount of power and generates a lot of heat. Oh, and it costs a couple of cars in total.
And that's the important abstraction / simplification you get when you start running hardware at scale. Your limitation is not necessarily TFlops, GHz or GB per cubic meter. It is easy to cram a crapton of those into a small place.
The main problem after a while is the ability to put enough power into the building and to move the heat out of it again. It sure would be easy to put a lot of resistors into a place to create a lot of power consumption. Hamburg Energy is currently building just that, to bleed off excess solar power into district heating.
The hard parts are connecting that safely to the 10 kV power grid and moving the heat away from the system fast.
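To make the power-and-heat framing concrete, a toy budget sketch (every number here is an illustrative assumption):

```python
# How much of a "1 GW" facility actually reaches the racks, and roughly
# how many racks that supports.
facility_mw = 1000
pue = 1.3               # assumed cooling + power-distribution overhead
rack_kw = 130           # assumed per-rack draw, GB200 NVL72 class
it_mw = facility_mw / pue
racks = it_mw * 1000 / rack_kw
print(f"IT power: {it_mw:.0f} MW -> ~{racks:,.0f} racks")
# Every one of those watts comes back out as heat that has to be moved
# away from the building.
```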
Because once you reach 1.21 GW the AI begins to learn at a geometric rate. Which means we finally get to AGI and OpenAI gets their $400B return.
I'm not saying there's no bubble, and I personally anticipate a lot of turmoil in the next year, but monetisation of that user base would be the most primitive way of earning a lot of money. If anyone is a dead man walking, it's Google. For better or worse, ChatGPT has become to AI what Google was to search, even though I think Gemini is also good or even better. I also have my own doubts about the value of LLMs, because I've already experienced a lot of caveats with the stuff they give you. But at the same time, as long as you don't believe it blindly, getting started with something new has never been easier. If you don't see value in that, I don't know what to tell you.
Google definitely has the better model right now, but I think ChatGPT is already well on its way to becoming to AI what Google was to search.
ChatGPT is a household name at this point. Any non-tech person I ask or talk about AI with assumes by default that it means ChatGPT. "ChatGPT" has become synonymous with "AI" for the average population, much in the same way "Google it" meant to perform an internet search.
So ChatGPT already has the popular brand. I think people are sleeping on Google, though. They have a hardware advantage, aren't reliant on Nvidia, and have way more experience than OpenAI in building out compute and with ML - Google has been an "AI company" since forever. Google's problem, if they lose, won't be tech or an inferior model; it will be that they absolutely suck at making products. What Google puts out always feels like a research project made public because someone inside thought it was cool enough to share. There's not a whole lot of product strategy or cohesion across the Google ecosystem.
Things that make me skip this specific narrative:
- There's some heavy-handed reaching to get to $400B in the next 12 months: guesstimate $50B = 1 GW of capacity, then list out 3.3 gigawatts across Broadcom chip purchases, Nvidia, and AMD.
- OpenAI is far better positioned than any of the obvious failures I've foreseen in my 37 years on this rock. It's very, very hard to fuck up to the point you go out of business.
- Ed is repeating narratives instead of facts ("what did they spend that money on!? GPT-5 was a big letdown!") - i.e. he remembers the chatgpt.com router discourse, and missed that it was the first OpenAI release that could capture the $30-50/day/engineer in spend we've been sending to Anthropic.
I wouldn't be surprised if the cost came down by at least one order of magnitude, two if Nvidia and others adjust their margin expectations. If the bet is that OpenAI can ship crappy datacenters with crappy connectivity/latency characteristics in places with cheap/existing power, then that seems at least somewhat plausible.
OpenAI burning 40 billion dollars on datacenters in the next 1 year is almost guaranteed. Modern datacenter facilities are carefully engineered for uptime, I don't think OpenAI cares about rack uptime or even facility uptime at this scale.
Open source models like DeepSeek and Llama 3 are rapidly catching up. If I can get 90% of the functionality for significantly less (or free, if I want to use my own GPU), what value does OpenAI really have?
I'm a paid subscriber of OpenAI, but it's really just a matter of convenience. The app is really good, and I find it's really great for double-checking some of my math. However, I don't know how they're ever going to become a corporate necessity at the prices they'll need to bill at to become profitable.
Then again, OpenAI is obviously run by some of the smartest people on planet Earth, with other extremely smart people giving them tons of money, so I could be completely wrong here.
Unless the actual race is to create an AI employee that operates and can deliver work without constant supervision. At that level, of course it would be cheaper to pay $2,000 a month straight to OpenAI vs. hiring a junior SWE.
I agree with this... for now. But the hosted commercial models aren't widening the gap as far as I can tell, if anything it appears to be narrowing.
And if the relative delta doesn't increase somehow I don't see any way in which the "AI race" doesn't end in a situation where locally run LLMs on relatively cheap hardware end up being good enough for virtually everyone.
Which in many ways is the best possible outcome, except for the likely severe economic effects when the bubble bursts on the commercial AI side.
Or is this the case of every HN discussion where what you do is “actual work” and what other people do is “toying around?”
Just because open source models are almost as good doesn't mean you can discount the convenience factor.
Both can be true: we're in an AI bubble, and the large incumbents will capture most of the value/be difficult to unseat.
https://news.ycombinator.com/item?id=42392302
(Before discussion of your comment devolves into nonsense about this.)
At one point you could get a Netflix subscription and it was convenient enough that people were pirating less. Now there's so many subscription services, we're basically back to cable packages, paying ever increasing amounts and potentially still seeing ads. I know I'm pirating a lot more again.
Uber vs cabs, Airbnb vs hotels - We've seen it time and time again, once the VC cashflow/infinite stonk growth dries up and they need to figure out how to monetize, the service becomes worse and people start looking for alternatives again.
I also wonder whether, similar to bitcoin mining, these things end up on specialist ASICs and before we know it a medium-tier mobile phone is running your own local models.
A good number of people used to pay for email. Now a tiny fraction does. It all hangs on whether OpenAI can figure out how to get ad revenue without people moving to a free competitor - and there will be plenty of those.
People love to quote Dropbox while ignoring all of the YC companies that are zombies or outright failed - just looking at the ones that have gone public:
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
When there's real money to be made investing in YC is off limits to the public: https://jaredheyman.medium.com/on-the-176-annual-return-of-a...
They also didn’t have the massive fixed cost outlays nor did they have negative unit economics that OpenAI has.
They pay for the hardware and electricity /s.
Plenty of people don't. That's an enduring advantage of using GPT over anything locally hosted.
Catching up for how long though? Large models are very expensive to train, and the only reason any of them are open is because ostensibly for-profit companies are absorbing the enormous costs on behalf of the open source scene. What's the plan B when those unprofitable companies (or divisions within companies) pull up the ladder behind them in pursuit of profits?
Catching up enough to execute a business successfully on the open source (and free as in free beer) alternatives to OpenAI. Once you have a single model that works, you can bootstrap your own internal ML infra for just that one use case and grow from there.
Nobody is advocating switching their coding agents to open source (yet), but that's not the bulk of the tokens in companies that have automated workflows integrated into their business.
That right there is why they are valuable. Most people are absolutely incompetent when it comes to IT. That's why no one you meet in the real world uses ad blockers. OpenAI secured their position in the mind share of the masses. All they had to do to become the next Google was find a way to force ads down the throats of their users. Instead they opted for the inflated-bubble-and-scam-investors strategy. Rookie mistake.
The reality is they're a paid service, and even if they 10x their prices they're still in the red.
Consumers do actually care about price. They will easily, and quickly, move to a cheaper service. There's no lock in here.
The hardware required to run something like DeepSeek / Kimi / GLM locally at any speed fast enough for coding is probably around $50,000. You need hundreds of gigabytes of fast VRAM to run models that come anywhere close to OpenAI or Anthropic.
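The back-of-envelope VRAM arithmetic, assuming something like DeepSeek-V3's ~671B total parameters (weights only, before KV cache and activations):

```python
# Weight memory ~= parameter count x bytes per parameter.
params = 671e9
for name, bytes_per in [("FP16", 2), ("FP8", 1), ("4-bit", 0.5)]:
    print(f"{name}: {params * bytes_per / 1e9:.0f} GB of weights")
# FP16 ~1342 GB, FP8 ~671 GB, 4-bit ~336 GB - hundreds of GB either way.
```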
While this post is full of conjecture (and somewhat unrelated to LLMs themselves, though not their economics), I wonder how the insane capex is going to be justified. Even if AI becomes fully capable of replacing salaried professionals, they'll still end up paying much, much more than it would have cost to just hire those armies of professionals for decades.
I think it's magnitudes less, actually.
North American grids are starving for electricity right now.
Someone ought to do a deep dive into how much actual total excess power capacity we have today (that could feasibly be used by data center megacampuses), and how much capacity is coming online and when.
Power plants are massively slow undertakings.
All these datacenter deals seem to be making an assumption that capacity will magically appear in time, and/or that competition for it doesn't exist.
There are many people in the USA who don’t overly care about technology but might care a lot about the economic risks of overly aggressively chasing strong AI capabilities.
I am forwarding this article to a few friends and family members.
kachapopopow•3h ago
This also assumes that intelligence continues to scale with compute, which is not a given.
sillysaurusx•2h ago
Isn’t it? Evidence seems to suggest that the more compute you throw at a problem, the smarter the system behaves. Sure, it’s not a given, but it seems plausible.
deadbabe•2h ago
But human brains are small and require far less energy to be very generally intelligent. So clearly, there must be a better way to achieve this AGI shit. Preferably something that runs locally in the palm of your hand.
JumpCrisscross•1h ago
Given doesn't mean proven, it means accepted as true. We can give variables fixed values, for example.
Entire classes of proofs, moreover, prove something cannot be true because if you assume it is you get a paradox or nonsense.
nutjob2•2h ago
That word is carrying a heavy load. There's no evidence that scaling works indefinitely on this particular sort of problem.
In fact there is no evidence that scaling solves computing problems generally.
In narrower fields, more compute gets better results, but that niche is not so large.
gkoberger•2h ago
But my personal belief is Sam Altman has a singular goal: AGI. Everything else keeps the lights on.
sho_hn•2h ago
My impression is that I hear a lot more about basic research from the competing high-profile labs, while OpenAI feels focused on their established stable of products. They also had high-profile researchers leave. Does OpenAI still have a culture looking for the next breakthroughs? How does their brain trust rank?
thelastgallon•2h ago
Of course, there aren't many people who don't want to be trillionaires. Rare exceptions [1]. But these are the people with the means to get there.
[1]: No means NO - Do you want a one million dollar answer NO!: https://www.youtube.com/watch?v=GtWC4X628Ek
JumpCrisscross•1h ago
He's clearly extracting way more money from OpenAI and its ecosystem than he could if he had traditional equity in a traditionally-structured start-up.
JumpCrisscross•2h ago
I’m increasingly convinced this is AI’s public relations strategy.
When it comes to talking to customers and investors, AGI doesn’t come up. At fireside chats, AGI doesn’t come up.
Then these guys go on CNBC or whatnot and it’s only about AGI.
cma•2h ago
I'm not sure if OpenAI has been willing to deploy weights to Google infrastructure.