Without a moat defined by massive user bases, computing resources, or data, any breakthrough your researchers achieve quickly becomes fair game for replication. Maybe there will be a new class of products, maybe there is a big lock-in these companies can come up with. No one really knows!
Yes, corporations need those numbers, but those few humans are way more valuable than any numbers out there.
Of course, only when others believe that they are at the frontier too.
Secrecy is also possible, and I'm sure there's a whole lot of that.
Do you think OpenAI could project their revenue in 2022, before ChatGPT came out?
I just hope the people funding his company are aware that they gave some grant money to some researchers.
https://www.reuters.com/technology/artificial-intelligence/o...
I agree these AI startups are extremely unlikely to achieve meaningful returns for their investors. However, based on recent Valley history, it's likely that high-profile 'hot startup' founders who are this well known will do very well financially regardless, and that enables them to not lose sleep over whether their startup becomes a unicorn or not.
They are almost certainly already multi-millionaires (not counting illiquid startup equity) just from private placements, signing bonuses, and banking very high salaries and bonuses for several years. They may not emerge from the wreckage with hundreds of millions in personal net worth, but the chances are very good that they'll be well into the tens of millions.
1. Most AI ventures will fail
2. The ones that succeed will be incredibly large. Larger than anything we've seen before
3. No investor wants to be the schmuck who didn't bet on the winners, so they bet on everything.
Your assumption is questionable. This is the biggest FOMO party in history.
Best case scenario you win. Worst case scenario you’re no worse off than anyone else.
From that perspective I think it makes sense.
The issue is that investment is still chasing the oversized returns of the ZIRP-era startup economy, all while the real world is coasting off what’s been built already.
One day all the real stuff will start crumbling, at which point it will become rational to invest in real-world things again instead of speculation.
(writing this while playing roulette in a casino. Best case I get the entertainment value of winning and some money on the side; worst case my initial bet wouldn’t make a difference in my life at all. Investors are the same, but they’re playing with billions instead of hundreds)
This is also Rogan's chief problem as a podcaster, isn't it?
Somehow I think AI researchers, despite being vastly overpaid, will turn out to be deeply inadequate for the task. As they have been during the last few AI winters.
I once said that to Rod Brooks, when he was giving a talk at Stanford, back when he had insect-level robots and was working on Cog, a talking head. I asked why the next step was to reach for human-level AI, not mouse-level AI. Insect to human seemed too big a jump. He said "Because I don't want to go down in history as the creator of the world's greatest robot mouse".
He did go down in history as the creator of the robot vacuum cleaner, the Roomba.
Research now matters more than scaling when research can fix limitations that scaling alone can't. I'd also argue that we're in the age of product, where the integration of product and models plays a major role in what they can do combined.
Not necessarily. The problem is that we can't precisely define intelligence (or, at least, haven't so far), and we certainly can't (yet?) measure it directly. And so what we have are certain tests whose scores, we believe, are correlated with that vague thing we call intelligence in humans. Except these test scores can correlate with intelligence (whatever it is) in humans and at the same time correlate with something that's not intelligence in machines. So a high score may well imply high intelligence in humans but not in machines (e.g. perhaps because machine models may overfit more than a human brain does, and so an intelligence test designed for humans doesn't necessarily measure the same thing we think of when we say "intelligence" when applied to a machine).
This is like the following situation: Imagine we have some type of signal, and the only process we know produces that type of signal is process A. Process A always produces signals that contain a maximal frequency of X Hz. We devise a test for classifying signals of that type that is based on sampling them at a frequency of 2X Hz. Then we discover some process B that produces a similar type of signal, and we apply the same test to classify its signals in a similar way. Only, process B can produce signals containing a maximal frequency of 10X Hz and so our test is not suitable for classifying the signals produced by process B (we'll need a different test that samples at 20X Hz).
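A minimal numeric sketch of that aliasing point, in Python (NumPy assumed; the specific frequencies are illustrative, not from the comment):

    # Sample at 2X Hz (here X = 1 Hz) and compare a signal from "process A"
    # (content below X Hz) with one from "process B" (content above X Hz).
    import numpy as np

    fs = 2.0                              # sampling rate: 2X Hz
    t = np.arange(20) / fs

    slow = np.sin(2 * np.pi * 0.5 * t)    # process A style signal
    fast = np.sin(2 * np.pi * 2.5 * t)    # process B style signal

    # Sampled at 2 Hz, the 2.5 Hz signal aliases onto 0.5 Hz: the two
    # sample sequences coincide, so a test designed for A misreads B.
    print(np.allclose(slow, fast))        # -> True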
Models also struggle to avoid fabricating references or entire branches of science.
edit: "needing phd level research ability [to create]"?
Models aren't intelligent, the intelligence is latent in the text (etc) that the model ingests. There is no concrete definition of intelligence, only that humans have it (in varying degrees).
The best you can really state is that a model extracts/reveals/harnesses more intelligence from its training data.
Note that if this is true (and it is!) all the other statements about intelligence and where it is and isn’t found in the post (and elsewhere) are meaningless.
He’s wrong, we still scaling, boys.
> Maybe here’s another way to put it. Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling—maybe plus or minus, let’s add error bars to those years—because people say, “This is amazing. You’ve got to scale more. Keep scaling.” The one word: scaling.
> But now the scale is so big. Is the belief really, “Oh, it’s so big, but if you had 100x more, everything would be so different?” It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.
If these agents moved toward a pricing model where $$$ were charged for project completion plus lower ongoing code-maintenance cost and for moving large projects forward, _somewhat_ similar to how IT consultants charge, this would be a much better world.
Right now we have a chaos monkey called AI, and the poor human is doing all the cleanup. Not to mention an effing manager telling me that now that I "have" AI, I should push 50 features instead of 5 in this cycle.
Would it?
We’d close one of the few remaining social elevators, displace highly educated people by the millions, and accumulate even more wealth at the top of the chain.
If LLMs manage similar results to engineers and everyone gets free unlimited engineering, we’re in for the mother of all crashes.
On the other hand, if LLMs don’t succeed we’re in for a bubble bust.
Wow. No. Like so many other crazy things that are happening right now, unless you're inside the requisite reality distortion field, I assure you it does not feel normal. It feels like being stuck on Calvin's toboggan, headed for the cliff.
Oriol Vinyals, VP of Gemini research:
https://x.com/OriolVinyalsML/status/1990854455802343680?t=oC...
My guess is we'll discover that biological intelligence is 'learning' not just from your own experience, but from that of thousands of ancestors.
There are a few weak pointers in that direction. E.g., a father who experiences a specific fear can pass that fear to grandchildren through sperm alone [1].
I believe this is at least part of the reason humans appear to perform so well with so little training data compared to machines.
The whole mess surrounding Grok's ridiculous overestimation of Elon's abilities compared to other world stars did not so much show Grok's sycophancy or bias towards Elon as it showed that Grok fundamentally cannot compare (generalize) and has no deeper understanding of what the generated text is about. Calling for more research and less scaling is essentially saying: we don't know where to go from here. Seems reasonable.
Today on X, people are having fun baiting Grok into saying that Elon Musk is the world’s best drinker of human piss.
If you hired a paid PR sycophant human, even of moderate intelligence, it would know not to generalize from “say nice things about Elon” to “say he’s the best at drinking piss”.
aunty_helen•30m ago
If you think that AGI is not possible to achieve, then you probably wouldn't be giving anyone money in this space.
shwaj•55m ago
If the former, no. If the latter, sure, approximately.
Animats•36m ago
The business question is, what if AI works about as well as it does now for the next decade or so? No worse, maybe a little better in spots. What does the industry look like? NVidia and TSMC are telling us that price/performance isn't improving through at least 2030. Hardware is not going to save us in the near term. Major improvement has to come from better approaches.
Sutskever: "I think stalling out will look like…it will all look very similar among all the different companies. It could be something like this. I’m not sure because I think even with stalling out, I think these companies could make a stupendous revenue. Maybe not profits because they will need to work hard to differentiate each other from themselves, but revenue definitely."
Somebody didn't get the memo that the age of free money at zero interest rates is over.
The "age of research" thing reminds me too much of mid-1980s AI at Stanford, when everybody was stuck, but they weren't willing to admit it. They were hoping, against hope, that someone would come up with a breakthrough that would make it work before the house of cards fell apart.
Except this time everything costs many orders of magnitude more to research. It's not like Sutskever is proposing that everybody should go back to academia and quietly try to come up with a new idea to get things unstuck. They want to spend SSI's $32 billion valuation on some vague ideas involving "generalization". Timescale? "5 to 20 years".
This is a strange way to do corporate R&D when you're kind of stuck. Lots of small and medium-sized projects seem more promising, along the lines of Google X. The discussion here seems to lean in the direction of one big bet.
You have to admire them for thinking big. And even if the whole thing goes bust, they probably get to keep the house and the really nice microphone holder.
Quothling•36m ago
I think the title is an interesting thing, because the scaling isn't about compute. At least as I understand it, what they're running out of is data, and one of the ways they deal with this, or may deal with this, is to have LLMs running concurrently and in competition. So you'll have thousands of models competing against each other to solve challenges through different approaches. Which to me would suggest that the need for hardware scaling isn't about to stop.
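A hedged sketch of that "competition" idea, read as best-of-N selection (Python; generate and score are made-up placeholders, not any real model API):

    import random

    def generate(prompt: str, attempt: int) -> str:
        # Stand-in for one model (or one sampled run) proposing a solution.
        return f"attempt {attempt}: solution for {prompt!r}"

    def score(candidate: str) -> float:
        # Stand-in for a verifier: unit tests, a reward model, or a judge LLM.
        return random.random()

    def solve(prompt: str, n: int = 1000) -> str:
        # Run n attempts "in competition" and keep the highest-scoring one.
        candidates = (generate(prompt, i) for i in range(n))
        return max(candidates, key=score)

    print(solve("fix the failing test"))

If the attempts compete in anything like this way, the hardware bill scales with the number of attempts rather than with a single bigger model, which fits the point above.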