Hindsight says, don't do fraud.
Things working out in the end doesn't make what he did not a crime at the time. He was a common paper hanger, albeit one passing billions instead of bad checks.
Morally speaking, no. Practically speaking, it does. He would not have seen jail time.
It's literally exactly what Shkreli got 7 years for, even after repaying investors. If you defraud money from someone and put it back before they find out, it's still a crime. Fraud is about intent more than anything else, and they proved it for SBF.
Some who take on unreasonable risk will be among the most successful people alive. Most will lose eventually, long before you hear about them, if they keep taking crazy risks.
Who is a great genius, and who is just winning at "the Martingale entrepreneurial strategy"?
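For anyone unfamiliar with the reference: the Martingale strategy is "double your stake after every loss, reset after every win." A toy simulation (made-up bankroll and bet sizes, fair coin assumed) shows why it looks like genius right up until it doesn't:

```python
import random

def martingale(bankroll, base_bet=1, p_win=0.5, max_rounds=10_000):
    """Double the stake after every loss; reset after a win.
    Returns the final bankroll, or 0 once the doubled bet can no
    longer be covered (the strategy busts)."""
    bet = base_bet
    for _ in range(max_rounds):
        if bet > bankroll:       # can't cover the doubled bet: bust
            return 0
        if random.random() < p_win:
            bankroll += bet      # win: pocket the base stake...
            bet = base_bet       # ...and start the ladder over
        else:
            bankroll -= bet      # loss: double down
            bet *= 2
    return bankroll

random.seed(0)
results = [martingale(100) for _ in range(1000)]
ruined = sum(r == 0 for r in results)
# Almost every run grinds out small steady gains for a while,
# then one long losing streak wipes the whole bankroll out.
```

Each survivor looks like a consistent winner in the meantime, which is exactly the point of the comment above.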
It's not just about surviving a downturn and unforeseen circumstances with some luck (like the sibling comment about FedEx barely making it). Tesla, for example, was famously extremely close to bankruptcy.
But SBF got into the situation he was in due to his egregious fraud. The accounting at FTX was a criminal joke, with multiple sets of books, bypassable controls, outright fake numbers. My guess is that if SBF had survived that particular BTC downturn that his extreme hubris and willingness to commit fraud would have eventually done him in - downturns always happen at some point, and his brazenness in his criminal enterprise showed no signs of learning from mistakes.
Sure, all hugely successful companies have a ton of luck involved. But I think it's a mistake to pretend that SBF was just done in by bad timing, or that all companies do what he did. His empire collapse was pretty inevitable IMO if you look at what a clown show FTX was under the covers.
Whether by negligence or intent, FTX was arranged so that they couldn't go bust without stealing.
That number isn’t 0
Does no one still remember that tether continually stalled audits FOR YEARS in the face of increasing scrutiny?
We should all try to remember this the next time we vote to cut taxes on billionaires.
"what do you call a rogue trader that makes money?"
"Managing director"
If someone makes the money back in time, everything is forgiven. Money blinds us.
The issue wasn't that crypto markets in general were down at that point; the issue was they were doing frauds.
If the liquidators had perfect hindsight, they'd be trading their own money. Not cleaning up other people's messes.
Their job is to be responsible and follow procedure.
And it's cash from asset managers. It's not $10B worth of compute time from Microsoft or Google.
Much like any other investment. What do you think makes this more speculative than any other investment?
Deficit spending doesn't create new money. Deficit spending borrows existing money from the population and institutions in exchange for a promise of future government revenues. The Fed does not participate in treasury primary auctions and does not monetize the debt as a means of funding government operations.
If you printed new money to pay for the government, you wouldn't have a debt. That's double-counting. Not to mention the debt is twice as large as the entire money supply so what you're suggesting isn't even physically possible. It would be inflationary to simply print new money to finance spending, which is exactly why it's not done.
[edit] Also the debt limit is a stupid concept that's likely unconstitutional. Congress authorizes spending; refusing to pay for it by not raising the debt limit likely falls afoul of the 14th Amendment's public debt clause. But yeah, I mean, the debt limit goes up because the government spends more money than it takes in, so it needs to borrow more each year.
The Fed doesn't have nearly as much control as folks think.
The Fed directly created money during QE and they are directly destroying it during QT. There's a net add, but that's mostly because the economy is growing, which creates new demand for money as expressed by demand for debt.
The money supply staying fixed or shrinking is a non-goal anyways. It's irrelevant. What matters is inflation as measured from the change in actual prices.
That new money is different from the new money the central bank creates to push interest rates down. The latter is what the US has been destroying. But both do many of the same things (though not all).
1) Will I (and others) be able to get an H100 (or similar) when the bubble pops, and would that lead to new innovations from the GPU poor?
2) Will China take the lead in AI as they are less "capitalistic" with the demands for outsized returns on their investment compared to US companies, and they may be more willing to continue to sink money into AI despite possible market returns?
Some will be used; a lot will be written off and tossed away.
H100s will not age that well. It's not like owning old railroad tracks; it's like owning a fleet of 1992 Ford Tauruses. They'll be obsolete and uneconomical in just a few years as semiconductor manufacturing continues to improve.
[Voted down by the cash cabal! Arise! Knowledge workers of the world, you have nothing to lose but your SPARE CHANGE!]
A man looks at economics. Understands nothing. Thinks it must all be fake and made up. He must be so smart for seeing through it!
Btw there's a decentish board game called Modern Art based around the pricing of art with no intrinsic value.
A company has agency; it seeks to add economic value to itself over time including changing people’s perceptions.
I don’t see how your comments have any bearing on the point I was making. What am I missing?
How? The market is the one that made the decision to invest. They are not playing musical chairs.
The laws of economics have the kind of inevitability you expect from the laws of physics. Disrespect them at your own peril.
All of whom have a real world standardized thing to exchange for this already
Why do you think this discussion even needs to include the people who don’t have that standardized thing to exchange? If that’s what you think.
* I’m an equal opportunity critic of comments that are indistinguishable from people yelling into the void with whatever pops into their head. So yes, I’m extremely critical of this very human tendency that isn’t helpful.
Remember, every technology you use today followed this pattern, with winners emerging that absolutely did go on to be extremely profitable for decades.
Most of us remember the .com era. But in the early 1900s there were literally hundreds of automotive startups (actual car companies, plus tens of thousands of supplier startups) in the metro-Detroit area: https://en.wikipedia.org/wiki/List_of_defunct_automobile_man...
Some of these went on to be absolutely fantastic investments, most didn't. All VCs and people who invest in venture know this pattern.
Everybody involved knows exactly the high risk level of the bets they are making. This is not "dumb" money detached from reality, and the pension funds with a 3% allocation to venture are going to be just fine if all these companies implode, this is just uncorrelated diversification for them. The point of these VC funds is to lose most of the time and win big very rarely.
There will be crashes, and more bubbles in the future. Humans will human. Everything is fine.
Too many normies were betting their life savings without understanding this risk in prior bubbles, so we regulated away the ability for non-institutional investors to take venture risk at all.
Some institutions try to achieve this by launching their own cryptocurrencies, but by and large, the market isn't biting.
Clear, testable predictions are possible if you try.
What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.
Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol
The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund
Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...
Video seems to be moving forward fast from what I can tell, but it sounds like the back-end cost of compute there is skyrocketing with it, raising other questions.
It's called the efficient compute frontier.
They don't even burn it on AI all the time either: https://openai.com/sam-and-jony/
"This is an extraordinary moment.
Computers are now seeing, thinking and understanding.
Despite this unprecedented capability, our experience remains shaped by traditional products and interfaces."
I don't even want to learn about them; every line is so exhausting.
"We would like to introduce you to the spawn of Johnny Ive and Sam Altman, we're naming him Damien Thorn."
I don't think we're hitting peak of what LLMs can do, at all, yet. Raw performance for one-shot responses, maybe; but there's a ton of room to improve "frameworks of thought", which are what agents and other LLM based workflows are best conceptualized as.
The real question in my mind is whether we will continue to see really good open-source model releases for people to run on their own hardware, or if the companies will become increasingly proprietary as their revenue becomes more clearly tied up in selling inference as a service vs. raising massive amounts of money to pursue AGI.
Initially, first party proprietary solutions are in front.
Then, as the second-party ecosystem matures, they build on highest-performance proprietary solutions.
Then, as second parties monetize, they begin switching to OSS/commodity solutions to lower COGS. And with wider use, these begin to outcompete proprietary solutions on ergonomics and stability (even if not absolute performance).
While Anthropic and OpenAI are incinerating money, why not build on their platforms? As soon as they stop, the scales tilt towards an apache/nginx-style commoditized backend.
That's still a pretty good deal for an investor: if I give you $15B, you will probably make a lot more than $15B with it. But it does raise questions about when it will simply become infeasible to train the subsequent model generation due to the costs going up so much (even if, in all likelihood, that model would eventually turn a profit).
"probably" is the key word here, this feels like a ponzi scheme to me. What happens when the next model isn't a big enough jump over the last one to repay the investment?
It seems like this already happened with GPT-5. They've hit a wall, so how can they be confident enough to invest ever more money into this?
If model training has truly turned out to be profitable at the end of each cycle, then this company is going to make money hand over fist, and investing money to out compete the competition is the right thing to do.
Most mega corps started out wildly unprofitable due to investing into the core business... until they aren't. It's almost as if people forget the days of Facebook being seen as continually unprofitable. This is how basically all huge tech companies you know today started.
Having experienced Anthropic as a customer, I have a hard time thinking that their inevitable failure (something I'd bet on) will be model/capability-based; that's how bad they suck at every other customer-facing metric.
You think Amazon is frustrating to deal with? Get into a CSR-chat-loop with an uncaring LLM followed up on by an uncaring CSR.
My minimum response time with their customer service is 14 days -- 2 weeks -- while paying $200 a month.
An LLM could be 'The Great Kreskin' and I would still try to avoid paying for that level of abuse.
“There Is No AI Revolution” - Feb ‘25:
This was the power of Moore's Law, it gave the semiconductor engineers an argument they could use to convince the money-guys to let them raise the capital to build the next fab- see, it's right here in this chart, it says that if we don't do it our competitors will, because this chart shows that it is inevitable. Moore's Law had more of a financial impact than a technological one.
And now we're down to a point where only TSMC is for sure going through with the next fab (as a rough estimate of cost, think 40 billion dollars)- Samsung and Intel are both hemming and hawing and trying to get others to go in with them, because that is an awful lot of money to get the next frontier node. Is Apple (and Nvidia, AMZ, Google, etc.) willing to pay the costs (in delivery delays, higher costs, etc.) to continue to have a second potential supplier around or just bite the bullet and commit to TSMC being the only company that can build a frontier node?
And even if they can make it to the next node (1.4nm/14A), can they get to the one after that?
The implication for AI models is that they can end up like Intel (or AMD, selling off their fab) if they misstep badly enough on one or two nodes in a row. This was the real threat of Deepseek: if they could get frontier models for an order of magnitude cheaper, then the entire economics of this doesn't work. If they can't keep up, then the economics of it might, so long as people are willing to pay more for the value produced by the new models.
He says "You paid $100 million and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume in this cartoonish cartoon example that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model is actually, in this example is actually profitable. What's going on is that at the same time"
notice those are hypothetical numbers and he just asks you to assume that inference is (sufficiently) profitable.
He doesn't actually say they made money by the EoL of some model.
The companies doing foundational video models have stakeholders that don’t want to be associated with what people really want to generate
But they are pushing the space forward and the uncensored and unrestricted video model is coming
When it comes to sexually explicit content in general with adults, all of our laws rely on the human actor existing
FOSTA and SESTA relate to user-generated content of humans, for example. They rely on making sure an actual human isn't being exploited, and they burden everyone with that enforcement. When everyone can just say "that's AI," nobody's going to care, and platforms will be willing to take the risk of it being true again - or a new hit platform will. That kind of content doesn't currently exist in large quantities, and won't until an ungimped video model can generate it.
Concerns about trafficking only apply to actual humans, not entirely new avatars.
Regarding children, there are more restrictions that may already cover this. There is a large market for just adult-looking characters, though, and worries about underage content can be tackled independently - or be found entirely futile. Not my problem; focus on what you can control. This is what's coming, though.
People already don't mind parasocial relationships with generative AI and already pay for that; just add nudity.
that said, I'm sure you can imagine that the really illegal, truly, positively sickening and immoral stuff is children-adjacent and you can be 100% sure there are sociopaths doing training runs for the broken people who'll buy the weights.
Additionally, the entire "payment processors leaning on Steam" thing shows that it might be very difficult to monetize a model that's known for generating extremely controversial content. Without monetization, it would be hard for any company to support the training (and potential release) of an unshackled enterprise-grade model.
That article has nothing to do with AI, really.
In the meanwhile, "better data", "better training methods" and "more training compute" are the main ways you can squeeze out more performance juice without increasing the scale. And there are obvious gains to be had there.
Does this apply to Google, which is using custom-built TPUs while everyone else uses stock Nvidia?
If Google wants anything better than that? They, too, have to wait for the new hardware to arrive. Chips have a lead time - they may be your own designs, but you can't just wish them into existence.
All of the big AI players have profited from Wikipedia, but have they given anything back, or are they just parasites on FOSS and free data?
Probably because you're doing things that are hitting mostly the "well-established" behaviors of these models — the ones that have been stable for at least a full model-generation now, that the AI bigcorps are currently happy keeping stable (since they achieved 100% on some previous benchmark for those behaviors, and changing them now would be a regression per those benchmarks.)
Meanwhile, the AI bigcorps are focusing on extending these models' capabilities at the edge/frontier, to get them to do things they can't currently do. (Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model": ever-better domain-specific analysis capabilities, to "logic out" whether training data belongs in the training corpus for some fine-tune; and domain-specific synthesis capabilities, to procedurally generate unbounded amounts of useful fine-tuning corpus for specific tasks, ala AlphaZero playing unbounded amounts of Go games against itself to learn on.)
This means that the models are getting constantly bigger. And this is unsustainable. So, obviously, the goal here is to go through this as a transitionary bootstrap phase, to reach some goal that allows the size of the models to be reduced.
IMHO these models will mostly stay stable-looking for their established consumer-facing use-cases, while slowly expanding TAM "in the background" into new domain-specific use-cases (e.g. constructing novel math proofs in iterative cooperation with a prover) — until eventually, the sum of those added domain-specific capabilities will turn out to have all along doubled as a toolkit these companies were slowly building to "use models to analyze models" — allowing the AI bigcorps to apply models to the task of optimizing models down to something that run with positive-margin OpEx on whatever hardware that would be available at that time 5+ years down the line.
And then we'll see them turn to genuinely improving the model behavior for consumer use-cases again; because only at that point will they genuinely be making money by scaling consumer usage — rather than treating consumer usage purely as a marketing loss-leader paid for by the professional usage + ongoing capital investment that that consumer usage inspires.
Instead, I mean that these later-generation models will be able to be fine-tuned to do things like e.g. recognizing and discretizing "feature circuits" out of the larger model NN into algorithms, such that humans can then simplify these algorithms (representing the fuzzy / incomplete understanding a model learned of a regular digital-logic algorithm) into regular code; expose this code as primitives/intrinsics the inference kernel has access to (e.g. by having output vectors where every odd position represents a primitive operation to be applied before the next attention pass, and every even position represents a parameter for the preceding operation to take); cut out the original circuits recognized by the discretization model, substituting simple layer passthrough with calls to these operations; continue training from there, to collect new, higher-level circuits that use these operations; extract + burn in + reference those; and so on; and then, after some amount of this, go back and re-train the model from the beginning with all these gained operations already being available from the start, "for effect."
Note that human ingenuity is still required at several places in this loop; you can't make a model do this kind of recursive accelerator derivation to itself without any cross-checking, and still expect to get a good result out the other end. (You could, if you could take the accumulated intuition and experience of an ISA designer that guides them to pick the set of CISC instructions to actually increase FLOPS-per-watt rather than just "pushing food around on the plate" — but long explanations or arguments about ISA design, aren't the type of thing that makes it onto the public Internet; and even if they did, there just aren't enough ISAs that have ever been designed for a brute-force learner like an LLM to actually learn any lessons from such discussions. You'd need a type of agent that can make good inferences from far less training data — which is, for now, a human.)
Last week I put GPT-5 and Gemini 2.5 in a conversation with each other about a topic of GPT-5's choosing. What did it pick?
Improving LLMs.
The conversation was far over my head, but the two seemed to be readily able to get deep into the weeds on it.
I took it as a pretty strong signal that they have an extensive training set of transformer/LLM tech.
Model specialization. For example a model with legal knowledge based on [private] sources not used until now.
Or, as in the case of a leading North American LLM provider, I would love to be able to choose an older model but it chooses it for me instead.
Doesn’t explain Deepseek.
The problem is that in the meantime, they're going to nuke our existing power grid, built in the 1920s to 1950s to serve our population as it was in the 1970s, and for the most part not expanded since. All of the delta is in price-mediated "demand reduction" of existing users.
Edit: for the curious, no. An H100 costs about ~25k and produces $1.2/day mining bitcoin. Without factoring in electricity.
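Taking those figures at face value (the ~$25k price and ~$1.2/day mining revenue are the parent's numbers, electricity still ignored), the payback arithmetic works out like this:

```python
# Back-of-envelope payback period for mining on an H100,
# using the figures quoted above (electricity not factored in).
cost = 25_000            # approximate hardware cost, USD
revenue_per_day = 1.20   # approximate mining revenue, USD/day

payback_days = cost / revenue_per_day
payback_years = payback_days / 365
# ~20,833 days, or roughly 57 years -- many times the card's useful life.
```

So even before electricity, the card is obsolete long before it pays for itself.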
Labs can just step up the way they track signs of prompts meant for model distillation. Distillation requires a fairly large number of prompt/response tuples, and I am quite certain that all of the main labs have the capability to detect and impede that type of use if they put their backs into it.
Distillation doesn't make the compute moat irrelevant. You can get good results from distillation, but (intuitively, maybe I'm wrong here because I haven't done evals on this myself) you can't beat the upstream model in performance. That means that most (albeit obviously not all) customers will simply gravitate toward the better performing model if the cost/token ratio is aligned for them.
Are there always going to be smaller labs? Sure, yes. Is the compute moat real, and does it matter? Absolutely.
....while degrading their service for paying customers.
This is the same problem as law-enforcement-agency forwarding threats and training LLMs to avoid user-harm -- it's great if it works as intended, but more often than not it throws a lot more prompt cancellations at actual users by mistake, refuses queries erroneously -- and just ruins user experience.
I'm not convinced any of the groups can avoid distillation without ruining the customer experience.
And if LLMs don't keep getting qualitatively more capable every few months, that means that all this investment won't pay off and people will soon just use some open weights for everything.
And honestly I don't think a lot of these companies would turn a profit on pure utility -- the electric and water company doesn't advertise like these groups do; I think that probably means something.
> the electric and water company doesn't advertise like these groups do
I'm trying to understand what you mean here. In the US these utilities usually operate in a monopoly so there's no point in advertising. Cell service has plenty of advertising though.
Well who does the inference at the scale we're talking about here? That's (a key part of) the moat.
And for the newcomers, the scale needs to be bigger than what the incumbents (Google and Microsoft) have as discretionary spending - which is at least a few billion per year. Because at that rate, those companies can sustain it forever and would be default winners. So I think yearly expenditure is going to be $20B and up.
Taxi apps are a commodity today.
Or the last investor. When this type of money is raised, you can be sure the earlier investors are looking for ways to have a soft landing.
Like sure, it saves me a bit of time here and there, but will scaling up really solve the reliability issues that are the real bottleneck?
For what it is worth, $13 billion is about the GDP of Somalia (around 150th in nominal GDP), with a population of 15 million people.
The GDP of the Netherlands is about $1.2 trillion with a population of 18 million people.
I understand that that’s not quite what’s meant with ‘small country’ but in both population and size it doesn’t necessarily seem accurate.
California (where Anthropic is headquartered) has over twice as many people as all of Somalia.
The state of California has a GDP of $4.1 Trillion. $13 billion is a rounding error at that scale.
Even the San Francisco Bay Area alone has around half as many people as Somalia.
That’s the known minimum cost. We have a lot of room to get costs down if we can figure out how.
When you consider where most of that money ends up (Jensen &co), it's bizarre nobody can really challenge their monopoly - still.
You think any of these clusters large enough to be interesting, aren't authorized under a contractual obligation to run any/all submitted state military/intelligence workloads alongside their commercial workloads? And perhaps even to prioritize those state-submitted workloads, when tagged with flash priority, to the point of evicting their own workloads?
(This is, after all, the main reason that the US "Framework for Artificial Intelligence Diffusion" was created: America believed China would steal time on any private Chinese GPU cluster for Chinese military/intelligence purposes. Why would they believe that? Probably because it's what the US thought any reasonable actor would do, because it's what they were doing.)
These clusters might make private profits for private shareholders... but so do defense subcontractors.
Nice timing? I am sure they have scored a deal involving the sale of personal data.
I really have to wonder, how long will it be before the competition moves into who has the most wafer-scale engines. I mean, surely the GPU is a more inefficient packaging form factor than large dies with on-board HBM, with a massive single block cooler?
But I do believe that their cost per compute is still far more than disparate chips.
How so? DeepSeek and others do models on par with the previous generation for a tiny fraction of the cost. Where is the moat?
That's just pure insanity to me.
It's not even Internet speed or hardware. It's literally not having enough electricity. What is going on with the world...
So we can at least assume that whoever is deciding to move the capacity does so at some business risk elsewhere.
> GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+?
Your brain? Basically free (not counting time + food).
Disruption in this space will come from whoever can replicate analog neurons in a better way.
Maybe one day you'll be able to Matrix information directly into your brain and know kung-fu in an instant. Maybe we'll even have a Mentat social class.
Fifty years ago, we were starting to see the very beginning of workstations (not quite the personal computer of modern days), something like this: https://en.wikipedia.org/wiki/Xerox_Alto, which cost ~$100k in inflation-adjusted money.
Is there room for a smaller team to beat Anthropic/OpenAI/etc. at a single subject matter?
It still amazes me that Uber, a taxi company, is worth however many billions.
I guess for the bet to work out, it kinda needs to end in AGI for the costs to be worth it. LLMs are amazing but I'm not sure they justify the astronomical training capex, other than as a stepping stone.
> LLMs are amazing but I'm not sure they justify the astronomical training capex, other than as a stepping stone.
They can just... stop training today and quickly recoup the costs, because inference is mostly profitable.
How do you know models are expensive to run? They have gone down in price repeatedly in the last 2 years. Why do you assume it has to run in the cloud when open source models can perform well?
> The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground" and decisions around which AI to use are often being made on the basis of existing vendor relationships
There are hundreds of millions of chatgpt users weekly. They didn't need a C suite to push the usage.
Because cloud monetization was awful. It's either endless subscription pricing or ads (or both). Cloud is a terrible counter-example because it started many awful trends that strip consumer rights. For example "forever" plans that get yoinked when the vendor decides they don't like their old business model and want to charge more.
I think those actually using "AI" have a lot better idea of which are which than the C-suite folk.
Definitely not. That came years later but in the late 2000s to mid-2010s it was often engineers pushing for cloud services over the executives’ preferred in-house services because it turned a bunch of helpdesk tickets and weeks to months of delays into an AWS API call. Pretty soon CTOs were backing it because those teams shipped faster.
The consultants picked it up, yes, but they push a lot of things and usually it’s only the ones which actual users want which succeed.
The argument is that something like that is not really possible anymore, given the absurd upfront investments we're seeing existing AI companies need in order to further their offerings.
But yes, there was a window of opportunity when it was possible to do cutting-edge work without billions of investment. That window of opportunity is now past, at least for LLMs. Many new technologies follow a similar pattern.
What I always thought was exceptional is that it turns out it wasn't the incumbents who have the obvious advantage.
Set aside the fact that everyone involved is already in the top 0.00001% echelon of the space (Sam Altman and everyone involved with the creation of OpenAI): if you had asked me 10 years ago who would have the leg up creating advanced AI, I would have said all the big companies hoarding data.
Turns out just having that data wasn't a starting requirement for the generation of models we have now.
A lot of the top players in the space are not the giant companies with unlimited resources.
Of course this isn't the web or web 2.0 era where to start something huge the starting capital was comparatively tiny, but it's interesting to see that the space allows for brand new companies to come out and be competitive against Google and Meta.
Wouldn't it be the same for the hardware companies? Not everyone could build CPUs as Intel/Motorola/IBM did, not everyone could build mainframes like IBM did, and not everyone could build smartphones like Apple or Samsung did. I'd assume it boils down to the value of the LLMs instead of who has the moat. Of course, personally I really wish everyone could participate in the innovation like in the internet era, like training and serving large models on a laptop. I guess that day will come, like PCs over mainframes, but just not now.
The model leaders here are OpenAI and Anthropic, two new companies. In the programming space, the next leaders are Qwen and DeepSeek. The one incumbent is Google who trails all four for my workloads.
In the DevTools space, a new startup, Cursor, has muscled in on Microsoft's space.
This is all capital heavy, yes, because models are capital heavy to build. But the Innovator's Dilemma persists. Startups lead the way.
I'm curious to hear from experts how much this is true if interpreted literally. I definitely see that having hardware is a necessary condition. But is it also a sufficient condition these days? ... as in is there currently no measurable advantage to having in-house AI training and research expertise?
Not to say that OP meant it literally. It's just a good segue to a question I've been wondering about.
One more unimpressive release of ChatGPT or Claude, another $2 billion spent by Zuckerberg on subpar AI offers, and the final realization by CNBC that all of AI right now is just code generators, will do it.
You will have ghost data centers in excess like you have ghost cities in China.
From Dario’s interview on Cheeky Pint: https://podcasts.apple.com/gb/podcast/cheeky-pint/id18210553...
I am very curious about the GAAP numbers here.
[1]: It was $3B at the end of May (so likely $250M in May alone), and $5B at the end of July (so $400M that month).
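If those figures are annualized run rates (my assumption; the note doesn't say), the implied monthly numbers can be sanity-checked in a few lines:

```python
# Back-of-envelope: convert an annualized run rate (ARR) to implied
# monthly revenue. Figures are from the footnote above; treating them
# as ARR rather than cumulative revenue is an assumption.
def arr_to_monthly(arr_billion: float) -> float:
    """Implied monthly revenue in $M from an ARR stated in $B."""
    return arr_billion * 1000 / 12

may = arr_to_monthly(3.0)   # $3B run rate -> ~$250M/month
july = arr_to_monthly(5.0)  # $5B run rate -> ~$417M/month
print(round(may), round(july))  # → 250 417
```

On this reading the "$400M that month" figure is roughly the July run rate divided by twelve, not July's change in cumulative revenue.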
and we continue to pretend that the market generates any semblance of value.
Doing proper intrinsic valuation of technology firms is nigh-on impossible.
On a long enough timeframe, the open source models will catch up to the proprietary models and inference providers will beat these proprietary companies on price.
I got the impression that some people were reselling access and adding layers of fees to profit from the hype.
More importantly, we should ask who will be left holding the bag when this bubble bursts. For now, investors are getting their money back through acquisitions. Founders with desirable, traditional credentials are doing well, as are early employees at large AI startups who are cashing out on the secondary market. It appears the late-stage employees will be the ones who lose the most.
chat gpt 5 in codex is really good
so much that i stopped using claude code altogether
cheaper too
made me realize nobody has moat, coders especially will just go to whoever provides best bang for their buck.
But who knows what will be the best tool/model to use in October.
Are they putting Canadian public funds into Anthropic?
---
[1] https://www.crunchbase.com/organization/ontario-teachers-pen...
[2] https://www.otpp.com/en-ca/investments/our-investments/teach...
Although I admit that the government may be on the hook to replenish any spectacular failures in such a pension plan so in that way, it is somewhat fair -- though I doubt any one investment is weighted so heavily in any pension fund as to precipitate such an event.
With all these models converging, the big players aren’t demonstrating a real technical innovation moat. Everyone knows how to build these models now, it just takes a ton of cash to do it.
This whole thing is turning into an expensive race to the bottom. Cool tech, but bad business. A lot of VC folks are gonna lose their shirts in this space.
It's going to rock the market like we've never seen before.
This ignores differential quality, efficiency, partnerships, and lots more.
I think of it a bit like the Windows vs. macOS comparison. Obviously there will be many players that will build their own scaffolding around open or API-based models. But there is still a significant benefit to a single company being able to build both the model itself as well as the scaffolding and offering it as a unit.
0 - https://x.com/thisritchie/status/1944038132665454841
1- https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...
They can afford to burn a good chunk of global wealth so that they can have even more global wealth.
Even at the current rates of insanity, the wealthy have spent a tiny fraction of their wealth on AI.
Bezos could put up this $13 billion himself and remain a top five richest man in the world.
(Remember Elon cost himself $40 billion because of a tweet and still was fine!)
This is a technology that could replace a sizable fraction of humankind as a labor input.
I'm sure the rich can dig much deeper than this.
And if it does? What happens when a sizable fraction of humankind is hungry and can't find work? It usually doesn't turn out so well for the rich.
But it's pretty obvious wealth can be created and destroyed. The creation of wealth comes from trade, which generally comes from a vibrant middle class which not only earns a fair bit but also spends it. Wars and revolutions are effective at destroying wealth and (sometimes) equitably redistributing what's left.
Both the modern left and modern right seem to have arrived at a consensus that trade frictions are a good way to generate (or at least preserve) wealth, while the history of economics indicates quite the contrary. This was recently best pilloried by a comic that showed a town under siege and the besieging army commenting that this was likely to make the city residents wealthy by encouraging self-reliance.
We need abundant education and broad prosperity for stability - even (and maybe especially) for the ultra wealthy. Most things we enjoy require absolute and not relative wealth. Would you rather be the richest person in a poor country or the poorest of the upper class in a developed economy?
people don't even remember the era before the current brands. like the time a bell offshoot almost crashed canada because they siphoned all the telephone money into bad routers.
Now he's in AI investments.
5 minutes into my first opus prompt on Claude Code on an empty repo, I've already been warned by Claude Code that I'm about to hit my opus limit despite not using it in 12 days.
Because Nvidia is making actual profit selling hardware to those who do, not hoping for a big payout sometime in the future. Different risk/reward model, different goals.
Intellectual engagement goes down, users get dumber and only look at quantity. China is taking the first steps to protect its excellence. In the New York Post of all places:
https://nypost.com/2025/08/19/world-news/china-restricts-ai-...
"It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."
That applies to individuals, but it probably also applies to companies. We're in an AI boom? Raise some money while it's easy.
[1]https://www.reuters.com/technology/openai-tells-investor-not...
($100-plan, no agents, no mcp, one session at a time)
What a fantastic amount of money flying around though, to support my inane queries to Claude.
It's hard to escape the conclusion this is dumb money jumping on a bandwagon. To justify the expected returns here requires someone to make a transformer like leap again, and that doesn't take spending huge amounts in one place, but funding a lot more speculative thinkers.
Because of the legal uncertainty about what they were doing. There was no fundamental technological impediment.
Here the technology simply doesn't exist and this is a giant bet that it can be magically created by throwing (a lot) more money at the existing idea. This is why it's "dumb money" because they don't seem to understand the dynamics of what they're investing in.
I made a new top-level comment mentioning the 2006 YouTube acquisition only to show that many people were shocked, but -surprise- markets are usually better predictors than individual hunches.
It is very far from a situation where the price discovery mechanism is allowed to work.
1. How much an organization is willing to invest in X competes against other market opportunities.
2. The effective price per share (as part of the latest round of financing) is an implicit negotiation.
It is a matter of degree, sure, but my point still stands: there is a lot of collective information going into this valuation. So an individual should be intellectually humble relative to that. How many people have more information than even an imperfect market-derived quantity?
No, there isn't. For example, I would like to legally bet against Anthropic existing as a going concern in five years. Where can I do this? All the information against them is discarded and hidden.
It'll take a solid year and about 30k.
Any chance of even talking to a VC as an outsider?
I expect the next breakthroughs to be all about efficiency. Granted, that could be tomorrow, or in 5 years, and the AI companies have to stay all in in the meantime.
If there's a step-function breakthrough in efficiency, it's far more likely to be on the model side than on the semiconductor side. Even then, investing in the model companies only makes sense if you think one of them is going to be able to keep that innovation within their walls. Otherwise, you run into the same moat-draining problem.
That’s just about the most tangible benefit I see this AI breakthrough delivering. What an asset to have too, socially and civically, especially when compared to the West’s primary adversary: the CCP and its communist message of ‘equality’ for the people while they’re still working six days a week!
- Buy an old warehouse and a bunch of GPUs
- Hire your local tech dude to set up the machines and install some open-source LLMs
- Connect your machines to a routing service that matches customers who want LLM inference with providers
If the service goes down for a day, the owner just loses a day's worth of income; nobody else cares (it's not like customers are going to be screaming at you to find their data). This kind of passive, turn-key business is a dream for many investors. Comparable passive investments like car washes, real estate, laundromats, and self-storage are messier.
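As a toy sketch of the economics being described, here is a break-even calculation; every number in it is invented for illustration, not taken from any real GPU-hosting operation:

```python
# Toy break-even model for the passive GPU-hosting idea above.
# All inputs are illustrative guesses, not real market prices.
num_gpus = 64
gpu_cost = 2_500             # $ per used datacenter GPU
setup_cost = 40_000          # warehouse fit-out, racks, networking
revenue_per_gpu_hour = 0.20  # $ paid out by the routing service
power_per_gpu_hour = 0.06    # $ electricity at ~0.3 kW and $0.20/kWh
utilization = 0.5            # fraction of hours with paying work

capex = num_gpus * gpu_cost + setup_cost
margin_per_hour = num_gpus * utilization * (revenue_per_gpu_hour - power_per_gpu_hour)
months_to_break_even = capex / (margin_per_hour * 24 * 30)
print(round(months_to_break_even, 1))  # → 62.0
```

Even with made-up numbers, the shape of the result (years to break even, dominated by utilization and the routing service's payout rate) is the part that matters for the "dream passive investment" claim.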
I think once the sheen of Microsoft Copilot and the like wear off and people realise LLMs are really good at creating deterministic tools but not very good at being one, not only will the volume of LLM usage decline, but the urgency will too.
Narrow point: In general, one person’s impression of what is crazy does not fare well against market-generated information.
Broader point: If you think you know more than the market, all other things equal, you’re probably wrong.
Lesson: Only searching for reasons why you are right is a fishing expedition.
If the investment levels are irrational, to what degree are they? How and why? How will it play out specifically? Predicting these accurately is hard.
Google also bought Motorola for $12 billion and Microsoft bought Nokia for $7 billion. Those weren't success cases.
Or more similarly, WeWork got $12B from investors and isn't doing well (hell, bankrupt, according to Wikipedia).
A lot of that was patent acquisition rather than trying to run those businesses so it's hard to say a success or not.
However, I remembered when Youtube was young. It was burning money every month on bandwidth.
After selling out to Google, it took another decade to turn a profit. But it did. And it achieved its end game. As the winner, it took all of the video hosting market. And Google reaped the entirety of that win.
This AI race is playing out the same way. The winner has the ability to disrupt several FAANGs and FAANG neighbors (eg. Adobe). And that’s 1-2 trillion dollar market, combined.
I think one key question is can Anthropic replicate this on some other segment. Like with people working with financials.
Whatever it is, the signal it sends about Anthropic insiders is negative for AI investors.
Other comments having read a few hundred comments here:
- there is so much confusion, uncertainty, and fanciful thinking that it reminds me of the other bubbles that existed when people had to stretch their imaginations to justify valuations
- there is increasing spend on training models, and decreasing improvements in new models. This does not bode well
- wealth is an extremely difficult thing to define. It's defined vaguely through things like cooperation and trade. Ultimately these llms actually do need to create "wealth" to justify the massive investments made. If they don't do this fast this house of cards is going to fall, fast.
- having worked in finance and spoken to finance types for a long time: they are not geniuses. They are far from it. Most people went into finance because of an interest in money. Just because these people have $13bn of other people's money at their disposal doesn't mean they are any smarter than people orders of magnitude poorer. Don't assume they know what they are doing.
It's at least possible that the investment pays off. These investors almost certainly aren't insane or stupid.
We may still be in a bubble, but before you declare money doesn't mean anything any more and start buying put options I'd probably look for more compelling evidence than this.
I'm sure this exact sentence was said before every bubble burst.
I would assume the majority of investors in AI are playing a game of estimating how much more these AI valuations can run before crashing, and whether that crash will matter in the long-run if the growth of these companies lives up to their estimates.
I think that's one possible interpretation but another is that these funds choose to allocate a controlled portion of their capital toward high risk investments with the expectation that many will fail but some will pay off. It's far from clear that they are crazy or stupid.
> Headline: OpenAI raises 400 Trillion, proclaims dominion over the delta quadrant
> Top comment: This just proves that it's a bubble. No AI company has been profitable, we're in the era of diminishing returns. I don't know one real use case for AI
It's hilarious how routinely bearish this site is about AI. I guess it makes sense given how much AI devalues siloed tech expertise.
Step 2: achieve AGI.
Step 3: ?
Step 4: transcend money.
We're in a VC bubble; any project that mentions AI gets tons of money.
also if your founder has to use dozens of buzzwords when asked to describe what their app does and that still doesn't even explain it, it's obviously just BS.
"Arcarae’s mission is to help humanity remember and unlock the power each individual holds within themself so they can bring into reality their unique, authentic expression of self without fear or compromise.
Our research endeavors are designed to support this mission via computationally modeling higher-order cognition and subjective internal world models."
lol
What do you mean lol? Isn't that awesome? Feel free to share why if you think it isn't. I personally don't think there is enough information here to tell whether it's awesome or satire, but it is interesting how things like this are usually considered awesome, yet this particular one is deemed satire.
What does the product do?
I think this is like ChatGPT, but it generates an "inner monologue" in the background; the monologue is then added to the context, and this "addresses" "sycophancy, attention deficits, and inconsistent prioritization."
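A minimal sketch of what such a loop might look like, purely my guess at the architecture; `call_llm` is a hypothetical stand-in for any chat-completion API and is stubbed here so the example runs:

```python
# Hypothetical "background inner monologue" chat loop.
# call_llm stands in for a real chat-completion API; stubbed for the sketch.
def call_llm(messages):
    return "stub response"

def chat_with_monologue(history, user_msg):
    # 1. Generate a private reflection about the conversation so far.
    monologue = call_llm(history + [
        {"role": "system", "content": "Reflect privately on priorities, "
         "possible sycophancy, and what the user actually needs."},
        {"role": "user", "content": user_msg},
    ])
    # 2. Fold the monologue into the context as a hidden system message.
    history.append({"role": "system", "content": f"(inner monologue) {monologue}"})
    history.append({"role": "user", "content": user_msg})
    # 3. Produce the visible reply with the monologue in context.
    reply = call_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_with_monologue([], "Help me plan my week"))
```

The notable design choice, if this guess is right, is that the reflection stays in the context window across turns, so it costs extra tokens on every subsequent request.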
Unreasonable doesn’t even start to capture it. Anthropic being worth 10% of Alphabet is beyond insane.
So 10% of the valuation for 1.5% of the revenue, which grew 5x in the last 6 months. Doesn't seem as unrealistic as you put it, if it has a good gross margin, which some expect to be 60%.
Also Google was valued at $350B when it had $5B revenue.[1]
[1]: https://companiesmarketcap.com/alphabet-google/marketcap/
Investors are forward looking, and market conditions can change abruptly. If Anthropic actually displaces Google, it's amazingly cheap at 10% of Alphabet's market cap. (Ironically, I even knew that NVidia was displacing Intel at the time I invested, but figured that the magnitude of the transition couldn't possibly be worth the price differential. News flash: companies can go to zero, and be completely replaced by others, and when that happens their market caps just swap.)
Anthropic has several similar competitors with actual distribution and tech. The ones that can go 10x are underdogs, like Google before its IPO, or Amazon, or Shopify. Anthropic's current price is beyond that. Investors no longer give the public any big opportunities; they capture them via private funding.
Right now nobody wants to be the first to offer advertising in LLM services, but LLM conversation history provides a wealth of data for ad targeting. And in more permissive jurisdictions you can have the LLM deliver ads organically in the conversation or just shift the opinions and biases of the model through a short mention in the system message
As I said, insane. And that’s not even considering the 10 to 15% shares of Anthropic actually owned by Alphabet.
You may not agree with the market's estimation of that, but comparing just present revenue isn't really the right comparison.
Basically, 5x-ing revenue in 8 months off of a billion dollars starting revenue is insane. Growing this quickly at this scale breaks every traditional valuation metric.
(And no - this doesn't include margins or COGS).
Somebody above said that Anthropic might reach $9 billion ARR by the end of this year.
How much was Google's revenue in 2003? It was $1.5 billion ($2.6 billion in today's dollars).
Not saying the price is justified, but the comparison is not very fair.
I could see two or three percent, but this seems like a pretty big stretch. Then again, I'm not a VC.
Machine ice became competitive in India and Australia in the 1850s, but it took until the start of World War 1 (1914) for artificial ice production to surpass natural in America. And the industry only disappeared when every household could buy a refrigerator.
Self-driving doesn't have to scale globally to be economically viable as a technology. It could already be viable at $400k in HCOL areas with perfect weather (i.e. California, Austin, and other places they operate).
If AI is winner take all, then the value is effectively infinite. Obviously insane, but maybe it's winner take most?
Convincing billions of users to make a new account and do all their e-mail on a new domain? A new YouTube channel with all new subscribers? Migrate all their google drive and AdSense accounts to another company, etc?
This is trivially simple and creates no moat?
I know you aren't asserting this but rather just putting the argument out there, but to me at least it's interesting comparing a company that has vendor lock-in and monopoly or duopoly status in various markets vs one that doesn't.
I'd argue that Google's products themselves haven't been their moat for decades -- their moat is "default search engine status" in the tiny number of Browsers That Matter (Arguably just Chrome and Mobile Safari), being entrenched as the main display ad network, duopoly status as an OS vendor (Android), and monopoly status on OS vendor for low-end education laptops (ChromeOS). If somehow those were all suddenly eliminated, I think Google would be orders of magnitude less valuable.
When their sales have nosedived, new products have flopped, their CEO is the most disliked man in America, and their self driving still requires someone in the car at all times?
Tesla is a GameStop level meme stock.
From a technical perspective, they manage to attract top talent - Google / OpenAI lose a lot of good people to Anthropic. This is important since there are few people who can transform a business (e.g., the guy who built Claude Code). Being attractive to top talent means you're more likely to stumble upon them.
Edit: After looking it up, normal P/Sales ratios are on the order of 1. They vary from about 0.2 to 8 depending on industry.
It's not internally consistent, at all.
I do think this is important. Many of the best researchers are also religious AGIists and Anthropic is the most welcoming to them. This is a field where the competence of researchers really matters.