That's not quite the level of disagreement I was expecting given the title.
> I’m not against people making shoddy toy models, and I think they can be a useful intellectual exercise. I’m not against people sketching out hypothetical sci-fi short stories, I’ve done that myself. I am against people treating shoddy toy models as rigorous research, stapling them to hypothetical short stories, and then taking them out on podcast circuits to go viral. What I’m most against is people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI2027. This is just a model for a tiny slice of the possibility space for how AI will go, and in my opinion it is implemented poorly even if you agree with the author's general worldview.
In particular, I wouldn't describe the author's position as "probably not longer than 2032" (give or take the usual quibbles over what tasks are a necessary part of "superhuman intelligence"). Indeed, he rates social issues from AI as a more plausible near-term threat than dangerous AGI takeoff [0], and he is very skeptical about how well any software-based AI can revolutionize the physical sciences [1].
[0] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...
[1] https://titotal.substack.com/p/ai-is-not-taking-over-materia...
it's like asking about the difference between amateur toy audio gear and real pro-level audio gear... (which is not a simple thing, given that "prosumer products" dominate the landscape)
the only point in betting on when "real AGI" will happen boils down to the payouts from gambling on it. are such gambles a zero-sum game? does that depend on who escrows the bet?
what do I get if I am correct? what should those who are incorrect lose?
Most of these models predict superhuman coders in the near term, within the next ten years. This is because most of them share the assumptions that a) current trends will continue for the foreseeable future, b) "superhuman coding" is possible to achieve in the near future, and c) the METR time horizons are a reasonable metric for AI progress. I don't agree with all of these assumptions, but I understand why people who do accept them think superhuman coders are coming soon.
Personally I think any model that puts zero weight on the idea that there could be some big stumbling blocks ahead, or even a possible plateau, is not a good model.
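To make the trend-continuation assumption concrete, here is a minimal sketch of the arithmetic behind this style of forecast. The current horizon, doubling time, and "superhuman coder" threshold below are made-up illustrative numbers, not METR's data or the AI 2027 model:

```python
import math

# Hypothetical inputs (illustrative only, not METR's actual figures):
# the task length the best model can complete reliably, and how fast
# that horizon is assumed to keep doubling.
current_horizon_hours = 2.0
doubling_time_months = 7.0

# Stand-in threshold for a "superhuman coder": month-long (~160 work-hour) tasks.
target_horizon_hours = 160.0

doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_to_target = doublings_needed * doubling_time_months
print(f"Naive extrapolation: ~{months_to_target:.0f} months to the target horizon")

# The forecast hinges entirely on the trend holding: even a modest slowdown
# in the doubling time pushes the date out substantially, and a plateau
# breaks the extrapolation altogether.
for slowdown in (1.0, 1.5, 2.0):
    print(f"doubling time x{slowdown}: ~{months_to_target * slowdown:.0f} months")
```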
Pre-ChatGPT, I very much doubt the bullish predictions on AI would have been made the way they are now.
A human can do a long sequence of easy tasks without error, or can easily correct any mistakes they do make. Can a model do the same?
Of course, they gave it a terrible clickbait title and framed the question and graphs incorrectly. But if they had done the study better, it would have been "How long of a sequence of algorithmic steps can LLMs execute before making a mistake or giving up?"
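A rough way to see why that framing matters: if each step has to succeed independently, reliability compounds multiplicatively, so even a small per-step error rate sinks long sequences. A minimal sketch with made-up per-step success rates:

```python
def sequence_success(per_step_p: float, n_steps: int) -> float:
    """Probability of completing n_steps independent steps with no error."""
    return per_step_p ** n_steps

for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"per-step success {p}, {n} steps: {sequence_success(p, n):.2%}")

# e.g. 99% per step over 100 steps is only ~37% overall. A human who notices
# and corrects mistakes effectively resets this clock; a model that can't
# self-correct is stuck with the raw compounding.
```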
Making predictions that are too specific just opens you up to pushback from people who are more interested in critiquing the exact details of your softer predictions (such as those around timelines) than your hard predictions about likely outcomes. And while I think articles like this are valuable for refining timeline predictions, I find a lot of people use them as evidence to dismiss the stronger predictions made about the risks of ASI.
I think people like Nick Bostrom make much more convincing arguments about AI risk because they don't depend on overly detailed predictions which can be easily nit-picked, but are instead much more general and focus on the unique nature of the risks AI presents.
For me, the problem with timelines is that they're unknowable due to the unpredictable nature of ASI. The fact that we are rapidly developing a technology which most people would accept comes with at least some existential risk, whose progress curve we can't predict, and whose solutions would come with significant coordination problems, should concern people without anyone having to say it will happen in x number of years.
I think AI 2027 is interesting as a science fiction about potential futures we could be heading towards, but that's really it.
The problem with being an AI doomer is that you can't say "I told you so" if you're right, so any personal predictions you make have close to no expected pay-out, either socially or economically. This is different from other risks, where predicting accurately when others don't can still benefit you.
I have no meaningful voice in this space, so I'll just keep saying we're fucked, because what does it matter what I think. But I wish there were more people with influence out there who were seriously thinking about how they can best use that influence, rather than stroking their own egos with future predictions which, even if I happen to agree with them, do next to nothing to improve the distribution of outcomes.
(I'm sorry, I know it's a crass question)
I'm not sure the author did anyone a favor with this write-up. More than anything, it buries the main point ("this kind of forecasting is fundamentally bullshit") under a bunch of complicated-sounding details that lend credibility to the original predictions, which the original authors now get to argue about and thank people for pointing out as "minor issues which we have now addressed in the updated version".
The back-and-forth over σ² values and growth exponents feels like theatrics that bury the actual debate.
Truly a bizarre take. I'm sure the dinosaurs also debated the possible smell and taste of the asteroid that was about to hit them. The real debate. lol.