Firstly, what is AGI? I've never heard a decent definition. Some say it's an AI that's as smart or as general as humans; some say it's an AI that's conscious. I don't see how it could be a discrete "step" by either definition, because nothing fundamental changes between a regular AI and an AGI. To make an AI more general, you just make the input tokenization more granular, change the training method, and maybe bolt some kind of iterative framework on top so it can do more things. Also, AI is already more capable than humans at almost everything except scale.
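By "iterative framework on top" I mean something as simple as a loop that feeds the model's own output back to it, with tool calls, until it decides it's done. A rough sketch (call_model and run_tool here are made-up placeholders for whatever model and tools you'd wire in, not any real API):

    # Rough sketch of an "agent loop": nothing about the model itself
    # changes, you just wrap it in iteration and tool use.
    def agent_loop(task, call_model, run_tool, max_steps=10):
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            # call_model: placeholder for any text-in/text-out model
            action = call_model("\n".join(history))
            if action.startswith("DONE:"):
                return action[len("DONE:"):].strip()
            # run_tool: placeholder for executing whatever the model
            # asked for (search, file read, calculator, ...)
            result = run_tool(action)
            history.append(f"Action: {action}")
            history.append(f"Result: {result}")
        return "gave up"

Point being, the "AGI" part is just plumbing around the same model.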
I personally think it's just a hype word Altman spammed to get more funding and interest in OpenAI. Even if you snapped your fingers and had the right training mechanism, the right networks, etc. for an AGI/ASI, I get the feeling it wouldn't even be smarter than people in the technical sense. AI already blows most people out of the water on an IQ test at a fraction of a brain's computational power, but that's because IQ tests compare competence to get relative intelligence; they don't measure raw computation.
With that assumption, if AI can't be computationally stronger than humans, it's safe to say we won't have conscious computers for a while, just computers that act conscious. Does that mean AI from here on out is a waste, doing nothing but taking control away from people while benefiting us about the same? What is ChatGPT going to look like in 5 years? Am I just going to type in "do my taxes," and it's going to do whatever it wants on my PC until my taxes are done? Why would I ever want that over a system designed by accountants to do my taxes correctly EVERY TIME? One thing I know about AI is that it's slow as hell. AI is great, but when you really think about it, we just built a giant dictionary guy who's going to have the same problems human employees have.
I don't know, just kind of spewing thoughts. I'd love to hear from people who are actual experts at designing these things.