The most interesting part of this is section III — the idea that LLMs don't need to be AGI candidates themselves to accelerate the path to AGI. They just need to speed up the researchers working on the correct branch of the tech tree.
This seems underappreciated. Most of the "LLMs are a dead end" vs "LLMs will become AGI" debate treats these as exhaustive options, but there's a third path: LLMs as a force multiplier for whatever paradigm actually works. A neurosymbolic researcher who ships 3x faster because of LLM coding assistants is a very different scenario from LLMs becoming AGI on their own.
The jaggedness question (question 3) is the one I find hardest to dismiss. The gap between what AIs find easy and what humans find easy has been narrowing, but it's still wide in ways that matter. Whether that gap closes or persists seems like the crux of the whole debate.
kingstnap•17m ago
We already have/had recursive self-improvement in technology. It's just mutually recursive self-improvement.
And it will continue to be mutually recursive, because it's not like the only input to AI is AI. It's a lot of things.
MorkMindy74•1h ago