Hasn't the performance been asymptotic?
Before computers came along, we really couldn't fit curves to data much beyond simple linear regression; there was too much raw number crunching to make the task practical. Now that we have computers, and powerful ones, we've developed ever more advanced statistical inference techniques, and the payoff in what that enables for research and development is potentially immense.
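A toy example of the sort of thing that's now trivial but would have been a slog by hand: fitting a nonlinear curve with scipy. The model and the noisy data below are made up purely for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        # exponential growth curve, i.e. something beyond simple linear regression
        return a * np.exp(b * x)

    # synthetic noisy data standing in for real measurements
    x = np.linspace(0, 4, 50)
    y = model(x, 2.5, 0.7) + np.random.normal(scale=0.5, size=x.size)

    # the computer does the iterative least-squares fitting for us
    (a_hat, b_hat), _ = curve_fit(model, x, y, p0=(1.0, 0.5))
    print(f"fitted a={a_hat:.2f}, b={b_hat:.2f}")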
Why? What does that tell you?
If they did, the articles would look less like “wow, numbers are really big” and more like “disclaimer: I am short. Here’s my reasoning.”
They don’t even have to be short for me to respect it. Even being hedged or sitting on the sidelines would be understandable if you thought everything was massively overvalued.
It’s a bit like saying you think the rapture is coming, but you’re still investing in your 401k…
Edit: sorry to respond to this comment twice. You just touched on a real pet peeve of mine, and I feel a little like I’m the only one who thinks this way, so I got excited to see your comment
Heck, just look at yesterday: several million other people and I wouldn't have needed to march if smart people reliably ended up in charge.
I think it's more valuable to flip the lens around, and ask: "If you're so rich, why aren't you smart?"
tl;dr - it is really tiring reading these "clever" quips about "why won't you short then?", mainly because they are neither clever nor in any way new. We have heard that for a decade about "why won't you short BTC then?". You are not original.
Reminds me of the "everybody knows Tether doesn't have the dollars it claims and its collapse is imminent" that was parroted here for years.
This seems to be the disconnect.
It was a good lesson for me personally, to always check the wider picture and consider unknown factors.
“One of the reasons is the way they were built. The original large language model AI was built using vectors to try and understand the statistical likelihood that words follow each other in the sentence. And while they’re very clever, and it’s a very good bit of engineering required to do it, they’re also very limited.
The second thing is the way LLMs were applied to coding. What they’ve learned from — the coding that’s out there, both in and outside the public domain — means that they’re effectively showing you rote learned pieces of code. That’s, again, going to be limited if you want to start developing new applications.”
Frankly kind of amazing to be so wrong right out of the gate. LLMs do not predict the most likely next token. Base models do that, but the RL-trained chat models we actually use do not: RL optimizes for a reward, and the unit being rewarded is larger than a single token. On the second point, approximately all commercial software consists of a big pile of chunks of code that are themselves rote and uninteresting on their own.
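Rough sketch of the distinction, with made-up numbers and nobody's actual training code: a base model's loss is per-token, while a REINFORCE-style update assigns one sequence-level reward to the whole completion.

    import math

    def base_model_loss(token_logprobs):
        # next-token prediction: sum of per-token negative log-likelihoods
        return -sum(token_logprobs)

    def rl_objective(token_logprobs, reward):
        # REINFORCE-style signal: one reward scores the whole completion,
        # so the unit being rewarded is the sequence, not a single token
        return reward * sum(token_logprobs)

    # toy completion of three tokens with hypothetical log-probabilities
    logprobs = [math.log(0.9), math.log(0.5), math.log(0.7)]
    print(base_model_loss(logprobs))           # per-token objective
    print(rl_objective(logprobs, reward=1.0))  # sequence-level objective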
They may well end up at the right conclusion, but if you start out with false premises as the pillars of your analysis, the path that leads you to the right place can only be accidental.
Market Analyst, perhaps?
And there are legitimate applications beyond search. I don't know how big those markets are, but it doesn't seem that odd to suggest they might be larger than the search market.
They compete legitimately with Google Search as I compete legitimately with Jay-Z over Beyonce :)
Terr_•1h ago
There's always some "value" in a bubble, but how does one confirm there's enough "direct value" to make the investments proportionate?
Enormous investments should go with enormous benefits, and by now a very measurable portion of the expected benefit should have arrived...
belter•1h ago
Name one
raw_anon_1111•33m ago
But still, that is the ultimate survivorship bias. Is each new customer that Cursor has bringing in more money than they cost Cursor?