That's what I'm waiting for.
(He didn't specify when or how the money will get here, but I'm betting that I'll get my fair share.)
Your cheque will be in the post shortly.
So long as half of all people are still employed or in business, those people will insist that it's not AGI yet.
Until AI can fully replace you in your job, it's going to continue to feel like a tool.
Given a useful-enough general purpose body (with multiple appendage options), one of the most significant applications of whatever we end up calling AGI should be finally seeing most of our household chores properly roboticized.
When I can actually give plain language descriptions of 'simple' manual tasks around the house to a machine the same way I would to, say, a human 4th grader, and not have to spend more time helping it get through the task than it would take me to do it myself, that is when I will feel we have turned the corner.
I still am not at all convinced I will see this within the next few decades I probably have left.
Artificial human intelligence. Not what I'd call general, but I guess so long as we make it clear that by "general" we don't actually mean general, fine. I'd really expect actual general intelligence to do a lot better than human, in ways we can't understand any more than ants can comprehend us.
Edit: ok you guys, I take the point and have put the original title back. More at https://news.ycombinator.com/item?id=45430354.
I admit, though, that in this case “What is AGI?” better matches expectation to reality. Before I noticed the domain, “What the f*ck is AGI?” would have led me to expect more of a technical blog post with a playful presentation rather than the review article it actually is.
You raise an excellent point... I tend to agree.
I've kept "f*ck" in the title since that's in the original and arguably adds some subtlety in this case. Normally we'd replace it with the real word since we don't like bowdlerisms.
1) Few-shot to zero-shot training for achieving a useful ability on a given new problem.
2) Self-determining optimal paths to fine-tuning at inference time based on minimal instructions or examples.
3) Having the capacity to self-correct, maybe by building or confirming heuristics.
All of these describe, for example, an intern who is given a new, unseen task and can figure out the rest without handholding.
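Not from the article, just a toy sketch of the loop those three capabilities imply: hand the system a plain-language task plus a couple of examples, let it attempt the task, have it check its own output against a heuristic, and retry without further handholding. `ask_model`, `looks_right`, and the arithmetic task are all hypothetical stand-ins for a real model and a real chore.

```python
# Toy "intern" loop: few-shot task in, self-checked attempts out.
# ask_model is a hypothetical stand-in for a real model call; it is
# deliberately wrong on the first attempt so the retry branch runs.

def ask_model(task: str, examples: list[tuple[str, str]], attempt: int) -> str:
    question = task.split(":")[-1].strip()
    a, b = (int(x) for x in question.split("+"))
    return str(a + b + (1 if attempt == 0 else 0))  # off by one at first

def looks_right(task: str, answer: str) -> bool:
    # Self-check heuristic the agent builds or confirms for itself.
    question = task.split(":")[-1].strip()
    a, b = (int(x) for x in question.split("+"))
    return answer == str(a + b)

def intern_loop(task: str, examples: list[tuple[str, str]], max_tries: int = 3) -> str | None:
    for attempt in range(max_tries):
        answer = ask_model(task, examples, attempt)
        if looks_right(task, answer):
            return answer   # self-verified, no handholding needed
    return None             # give up and escalate to a human

examples = [("compute: 1 + 1", "2"), ("compute: 2 + 3", "5")]
print(intern_loop("compute: 40 + 2", examples))  # -> "42", on the second try
```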
Hard AI has long had a well-deserved jet-black reputation as a flaky field filled with armchair philosophers, hucksters, impresarios, and Loebner followers who don't understand the Turing Test. It eventually got so bad that the entire field decided to rebrand itself as "Artificial General Intelligence". But it's the same duck.
So, an intelligence may have evolved on geological time scales or on laboratory time scales, but it is its ability to learn to think and solve problems that will distinguish it from the generally high rate of failure.
Might want to write this out in full lol I thought this in particular was going to be a much more entertaining point.