Now, success on a tough math exam isn't "automating all human labor," but it is certainly a benchmark many thought AI would not achieve easily. Even so, many are claiming it isn't really a big deal, and that humans will still be far smarter than AIs for the foreseeable future.
My question is: if you are in that camp, what would it take for you to adopt a frame of mind roughly analogous to "It is realistic that AI systems will become smarter than humans, and could automate all human labor and cognitive output within a single-digit number of years"?
Would it require seeing a humanoid robot perform some difficult task? (The Metaculus definition of AGI, for instance, requires that a robot be able to satisfactorily assemble a circa-2021 Ferrari 312 T4 1:8 scale automobile model, or the equivalent.) Would it involve a Turing test of sufficient rigor? I'm curious what people's personal definition of "ok, this is really real" is.