The following paragraph is almost complete gibberish:
"For AI experts, Karpathy's view is a better counterargument to short timelines than ours. But for non-AI-experts, we think the practical considerations we raised are worth reflecting on with 6 more months of evidence. As forecasters, this is more of an "outside view" - regardless of how exactly AI improves, what problems might slow down an R&D-based takeoff scenario?"
Why would Karpathy's view be different for AI and non-AI-experts?
Did they use AI to write the article?
I realize now that it was presumptuous to assume people had done both of these things.
> Why would Karpathy's view be different for AI and non-AI-experts?
People who understand AI can engage with the substance of his claims about reinforcement learning, continuous learning, and the 9s of reliability (see the sketch below).
For people who don't, the article suggests treating AI as a black-box technology and asking questions about base rates: how long does adoption normally take? What do the companies developing the technology normally do?
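On the 9s point specifically, here's a minimal sketch of why they matter for agents. It assumes (my numbers, not Karpathy's) that each step of a long task succeeds independently:

    # Illustrative only: per-step reliability compounds over a multi-step task.
    # Assumes independent steps; the reliability figures are hypothetical.
    def task_success_rate(per_step_reliability: float, steps: int) -> float:
        """Probability that all steps of a task succeed."""
        return per_step_reliability ** steps

    for label, p in [("two 9s (99%)", 0.99),
                     ("three 9s (99.9%)", 0.999),
                     ("four 9s (99.99%)", 0.9999)]:
        print(f"{label}: a 100-step task succeeds "
              f"{task_success_rate(p, 100):.1%} of the time")
    # two 9s (99%):     36.6%
    # three 9s (99.9%): 90.5%
    # four 9s (99.99%): 99.0%

Each additional 9 cuts the failure rate tenfold, which is roughly why "the demo works" and "the product works" are such different claims.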
> It does not even give a statement about the reasoning behind why Karpathy said getting to https://ai-2027.com is unlikely.
That's the substance of the podcast; Karpathy justifies his views fairly well and at length.
> It also does not clearly define what AI 2027 is?
Dwarkesh covered AI 2027 when it came out, but for those who don't know, it's a deeply researched scenario in which runaway AI effectively destroys humanity within 2-3 years of publication. This is what I mean by "short timelines".
The total absence of guilt in these "academics" is notable. They are complete psychopaths.
However, to be fair, since that podcast, he has spent insane money on ML researchers...
Why are these criteria relevant? Engineer, because we are talking about a set of technologies that are engineering projects; there is no substitute for hands-on experience with systems. And an engineer has likely taken at least one course that included some history of AI, enough to give a sense of the time scales involved in getting from the perceptron to Sonnet 4.5.
Over 30 primarily because that's roughly old enough to have seen at least one tech hype cycle through which to filter the AI hype cycle. (Some people are old enough to remember the predictions that nobody would use screens in 2025 because everything would be a voice interface. Or how economics had fundamentally changed and companies didn't need to make money in the New Economy. Or how Tesla would for sure have 1 million robotaxis on the road in 2020. Etc.)
IMHO it's a bearish sign that boosters are not looking to experienced engineers for this kind of analysis.
https://www.pantone.com/articles/fashion-color-trend-report/...
Unless I’m confusing Slate Star Codex and this is a different S.A.
Coding is the least of the problems, and I'd guess today's Claude Code etc. is well capable of doing the drudgework.