This is highlighted in statements like this one:
> For AI to truly transcend human intelligence, it would need to learn from something more intelligent than humans.
Just imagine a human with a brain the size of a large watermelon. If the brain is like a computer (let's assume functional computationalism), then a larger brain means more computation. This giant-brained human would have an IQ of 300+ and could singlehandedly usher in a new age in human history... THIS is the analog of what AGI is supposed to be (except a lot more, because we can have multiple copies of the same genius).
Circling back to the article, this means that an AGI by definition would have the capacity to surpass human intelligence just like a genius human would, given that the AGI processes information the way human minds do. It wouldn't just synthesize data like current LLMs; it would actually be a creative genius and discover new things. This isn't to say LLMs won't be creative or discover new things, but the way they get there is completely different, more akin to narrow AI doing pattern matching than to a biological brain, which we know for sure has the right kind of creativity to discover and create.
PaulHoule•4h ago
If you have a framework stacked up to do it and you are just connecting to it, maybe, but I’d expect it to take more than 50 lines in most cases, and if somebody tried to vibe code it I’d expect the result to be somewhere between “it just doesn’t work” and “here’s a ticket where you can log in without a username and password”
PaulHoule•3h ago
At one point, playing chess was considered to be intelligent, but early in the computer age it was realized that alpha-beta search over 10 million positions or so would beat most people. Deep Blue (and later Stockfish) tuned up and scaled up that strategy to be superhuman.
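For concreteness, the core of that strategy is plain minimax with alpha-beta pruning. A minimal sketch (the `evaluate`, `legal_moves`, and `apply_move` helpers are hypothetical stand-ins for a real engine's board logic):

```python
# Minimal alpha-beta search sketch. evaluate(), legal_moves(), and
# apply_move() are hypothetical placeholders for a real engine's internals.
def alphabeta(position, depth, alpha, beta, maximizing):
    if depth == 0:
        return evaluate(position)  # static heuristic score of the position
    if maximizing:
        best = float("-inf")
        for move in legal_moves(position):
            best = max(best, alphabeta(apply_move(position, move),
                                       depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent would never allow this line
                break          # prune the remaining moves
        return best
    else:
        best = float("inf")
        for move in legal_moves(position):
            best = min(best, alphabeta(apply_move(position, move),
                                       depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The pruning is what makes searching millions of positions affordable: whole subtrees get skipped as soon as one refutation is found.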
Once one task falls, people move the goalposts.
There are some things that people aren't good at at all, like figuring out how proteins fold. When I was in grad school in the 1990s there was an intense effort to attract bright graduate students to a research program that, roughly, assumed that proteins "fold themselves" into the minimum energy configuration in water. Those assumptions turned out to be wrong: metastable states are important [1], and proteins don't just fold, they get folded [2]. At the time it was thought the problem was tough because the search space was beyond astronomical, and it remains beyond astronomical. Little progress was made.
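To put "beyond astronomical" in perspective, here's the classic Levinthal-style back-of-the-envelope count (my illustration; two dihedral angles per residue and three states per angle are conventional assumptions, not measurements):

```python
# Levinthal-style estimate of the conformational search space.
# Assumes ~2 backbone dihedral angles per residue, ~3 states each;
# both numbers are conventional illustrative assumptions.
residues = 100                        # a smallish protein
conformations = 3 ** (2 * residues)   # 3**200
print(f"{conformations:.2e}")         # ~2.7e+95, vs ~1e80 atoms
                                      # in the observable universe
```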
The best method we've got so far for interpreting a protein sequence is to compare it to other protein sequences, and I think AlphaFold is basically doing that with transformer magic as opposed to suffix tree magic.
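For a sense of what "compare it to other protein sequences" means in its simplest classical form, here's a toy Needleman-Wunsch alignment scorer. This is a sketch of the pre-deep-learning baseline, not what AlphaFold does, and the match/mismatch/gap values are made-up defaults rather than a real substitution matrix:

```python
# Toy Needleman-Wunsch global alignment score. Real pipelines use
# BLAST / profile HMMs / MSAs with substitution matrices like BLOSUM,
# not identity scoring; this just shows the dynamic-programming idea.
def align_score(a, b, match=1, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap            # prefix aligned against all gaps
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[-1][-1]

print(align_score("MKTAYIAKQR", "MKTAHIAKQR"))  # one substitution -> 8
```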
Godlike intelligence might not be all it's cracked up to be. Theology has long wrestled with questions like "if God is so almighty, how did he create screw-ups like us?" No matter how smart you are, you aren't going to be able to predict the weather much further ahead than we can now, because of the mathematics of chaos. The problems Kurt Gödel talks about, such as the undecidability of first-order logic plus arithmetic [3], are characteristic of the problem, not of the way we go about solving it.
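The chaos point is easy to demonstrate with the logistic map (a standard textbook example, my choice here): two trajectories that start 1e-10 apart become macroscopically different within a few dozen steps, so extra intelligence can't recover precision the initial measurement never had.

```python
# Sensitive dependence on initial conditions: logistic map x -> r*x*(1-x)
# at r = 4 (fully chaotic). The gap roughly doubles each step on average,
# so a 1e-10 measurement error swamps the forecast within ~35 iterations.
r = 4.0
x, y = 0.4, 0.4 + 1e-10
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```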
[1] https://en.wikipedia.org/wiki/Prion
[2] https://en.wikipedia.org/wiki/Chaperone_(protein)
[3] A real shame, because if we want to automate engineering, software development, or financial regulation, FOL + arithmetic is the natural representation language