That's the most important paragraph in the article. All of the self-serving exaggerations of Sam Altman and his ilk, predicting things and throwing out figures they cannot possibly know: "AI will cure cancer, and dementia! And reverse global warming! Just give more money to my company, which is a non-profit working for the good of humanity. What's that? Do you mean to say you don't care about the good of humanity?" What is the word for such behaviour? It's not hubris; it's a combination of wild prophecy and severe main character syndrome.
I heard once, though I have no idea if it's true, that he claims to carry a remote control around with him to nuke his data centres if they ever start trying to kill everyone. Which is obviously nonsense, but is exactly the kind of thing he might say.
In the meantime, they're making loads of money by claiming expertise in a field which doesn't even exist and, in my opinion, never will. That's the main thing, I suppose.
That would be quite useless even if it existed, since now that you've said it, the AGISGIAIsomething will surely know about it and take appropriate measures!
There is... chanting in team meetings in the US?
Has this been going on for long, or is this some new trend picked up from Asia or something like that?
This is a meme that will keep on giving.
[1] https://www.mercurynews.com/2020/11/25/theranos-founder-holm...
I expect this definition will be proven incorrect eventually. It would best be described as "human-level AGI" rather than AGI. AGI is a system that matches a core set of properties, but it's not necessarily tied to capabilities; theoretically one could create a very small, resource-limited AGI. The amount of computational resources available to the AGI will probably be one of the factors that determines whether it's, e.g., cat-level vs. human-level.
Currently, AGI is defined in a way where it is truly indistinguishable from superintelligence. I don’t find that helpful.
[1] https://www.noemamag.com/artificial-general-intelligence-is-...
Another quote: "Trying GPT-4.5 has been much more of a 'feel the AGI' moment among high-taste testers than I expected!"
If we were dogs, we'd invent a basic computer and start writing sci-fi films about whether computers could secretly smell things. We'd ask, "What does the sun smell like?"
They have, including multiple times in this very article, but the author's not willing to listen. As he says later:
> But set aside the technical objections—what if it doesn't continue to get better?—and you’re left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it’s not.
Modern AI researchers have proven that this is not true. They routinely increase the intelligence of systems by training on different data, using different compute, or applying different network architectures. But the author is absolutely convinced that this can't be so, so when researchers straightforwardly explain that they have done this, he's stuck trying to puzzle out what they could possibly mean. He references "Situational Awareness", an essay that includes detailed analyses of how researchers do this and why we should expect similar progress to continue, but he interprets it as a claim that "you don’t need cold, hard facts" because he presumes that the facts it presents can't possibly be true.
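For concreteness, and hedging that "intelligence" here is standing in for benchmark capability rather than anything deeper: the sort of result "Situational Awareness" leans on is the empirical scaling-law fit. The Chinchilla-style form (Hoffmann et al., 2022), for instance, models pre-training loss as a function of parameter count N and training tokens D:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where E, A, B, α, and β are constants fit to a sweep of actual training runs. You can dispute whether lower loss is "more intelligence," but "capability improves predictably with more data and compute" is a measured curve, not a confession of faith.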
So, if you assume that AGI is fake and impossible, it's... A conspiracy. Sure.
Though if you just finished quoting Turing (and folks like von Neumann), who thought it was possible, it would be good form to offer some reasoning that it's impossible, without alluding to the ineffable human soul or things like that.
That seems like a bad straw-man for "AI boosterism has the following hallmarks of conspiratorial thinking".
> offer some reasoning that it's impossible
Further on, the author has anticipated your objection:
> And there it is: You can’t prove it’s not true. [...] Conspiracy thinking looms again. Predictions about when AGI will arrive are made with the precision of numerologists counting down to the end of days. With no real stakes in the game, deadlines come and go with a shrug. Excuses are made and timelines are adjusted yet again.
No more than yelling "electricity is conspiracy thinking/Satan's plaything!" repeatedly would have stopped engineers in the 19th century from studying and building with it.
We don't have to save everybody, but only by trying do we save some.
That the claims appear extreme and apocalyptic doesn't tell us anything about correctness.
Yes, there are tons of people saying nonsense, but look back at events. For a while it seemed as though AI was improving extremely quickly. People extrapolated from that. I wouldn't call that extrapolation irrational or conspiratorial, even if it proves incorrect.
If they discussed what a future moon landing might be like or how it could work, they would be a futurist.
If they were raising funds for a moon landing they claimed to be currently working on, insisting success was imminent despite having no evidence that they could achieve it or had beaten the necessary technical hurdles, then they would be seen as a fraud.
It doesn't really matter that the moon landings did eventually happen.
Why would anyone subject themselves to so much hatred? Have some standards.
The days of plain-text Google AdWords are long, long gone.
In fact, generating ad views without purchasing anything reduces the value of the ads to the website.
If we define it as "a machine that can match humans on a wide range of cognitive tasks," that raises the questions: which humans? Which range? What cognitive tasks? I honestly think there is no answer you could give to these three that wouldn't cause everything to break down again:
For the first question, if you say "all humans," how do you measure that?
Do we use IQ? If so, then you have just created an AI that matches the average IQ of whatever "all" happens to be. I'm pretty sure (though I have no data to prove it) that the vast super-majority of people have never taken an IQ test, if they've even heard of one. So that limits your set to "all the IQ scores we have." But again... who is "we"? Which testing organization? There are quite a few IQ testing centers/orgs, and they all have variations in their metrics, scoring, weights, etc.
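(A concrete wrinkle with the IQ route: deviation IQ is norm-referenced by construction. For Wechsler-style tests it is roughly

$$\mathrm{IQ} = 100 + 15 \cdot \frac{x - \mu}{\sigma}$$

where x is the raw test score and μ and σ come from the human norming sample. The population average is 100 by definition, so "an AI that matches the average IQ of all humans" is circular before you've even picked a test.)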
If you measure it by some other thing, what's the measurement? What's the thing? And does that risk sending us spiraling into an infinite debate about what intelligence is? Because if so, the likelihood of us ever getting an AGI is nil. We've been trying to define intelligence for literally thousands of years, and we still can't find a definition that is even halfway universal.
If you say anything other than all, like "the smartest humans" or "the humans we tested it against," well... Do I really need to explain how that breaks?
For the second and third questions, I honestly don't even know what you'd answer. Is there even one? Even if we collapse them into "what wide range of cognitive tasks?", who creates the range of tasks? Are these tasks any human from, let's say, age five onward would be capable of doing? (Even if you answer yes here, what about those with learning disabilities or similar, who may not be able to do whatever tasks you set at that age because it takes them longer to learn?) Or are they tasks a PhD student would be able to do? (If so, then you've just broken the definition again.)
Even if we rewrite the definition to be narrower and less hand-wavy, like an AI which matches some core properties, as was suggested elsewhere in these comments, who defines the properties? How do we measure them? How do we prove that comparing the AI against these properties doesn't cause us to optimize for the lowest common denominator?
Also, in retrospect, something doesn't quite add up about the 'AI winter' narrative. It's hard to believe that so many people were studying and working on AI and that it still took so long, given that, ultimately, attention is all you need(ed).
I studied AI at university in Australia over a decade ago. The introductory course was great: we learned about decision trees, Bayesian probability, and machine learning, and we wrote our own ANNs from scratch. Then I took the advanced course, expecting to be blown away by the material, but the whole course was mathematics with no AI theory; even back then there was a lot of advanced material they could have covered (e.g. evolutionary computation) but didn't... I dropped out after a week or two because of how boring it was.
In retrospect, I feel like the course was made boring and irrelevant on purpose. I even remember someone in my circle mentioning that the AI winter wasn't real... while we were supposedly in the middle of it.
Also, I remember thinking at the time that evolutionary computation combined with ANNs was going to be the future... so I was kind of surprised that evolutionary computation seemingly disappeared from view. In retrospect, though, I think progress in that area could potentially lead to unpredictable and dangerous outcomes, so it may not be discussed openly.
Now I think: take an evolutionary algorithm, combine it with modern attention-based neural nets, and you'd surely get some impressive results.
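To make that concrete, here's a minimal sketch of the neuroevolution idea in Python; the network shape, population size, and mutation scale are all invented for illustration, and a real attempt would swap the toy net for a modern attention-based model and a far richer fitness function:

```python
# Toy neuroevolution sketch: evolve the weights of a tiny 2-4-1
# feedforward net on XOR using truncation selection plus Gaussian
# mutation. Every hyperparameter here is illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)

# XOR task: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4
N_PARAMS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2 flattened

def forward(params, x):
    """Run the 2-4-1 network on a batch of inputs."""
    i = 0
    W1 = params[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = params[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = params[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = params[i]
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def fitness(params):
    """Negative mean squared error; higher is better."""
    return -np.mean((forward(params, X) - y) ** 2)

MU, LAMBDA, SIGMA = 10, 50, 0.3  # parents kept, population size, mutation scale
pop = rng.normal(0.0, 1.0, size=(LAMBDA, N_PARAMS))

for _ in range(300):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-MU:]]                # keep the best MU
    children = parents[rng.integers(0, MU, LAMBDA - MU)]   # clone parents
    children = children + rng.normal(0.0, SIGMA, children.shape)  # mutate
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("outputs:", np.round(forward(best, X), 2))  # should approach [0, 1, 1, 0]
```

Swap in a different fitness function and this is essentially the shape of the classic neuroevolution work from the 2000s; the open question was always whether it scales.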
> a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world.
Sometimes 90% of the "hidden truths" are things already "known" by the believers, an elite knowledge that sets them apart from the sheeple. The remaining 10% is acquiring some McGuffin that finally proves they were Right-All-Along so that they can take a victory lap.
> Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.
In turn, AGI was the hot new flavor ("AI but better!") that companies pivoted to as consumers started getting disappointed and jaded with "AI" that wasn't going to give them robot butlers.
> When those people are not shilling for utopia, they’re saving us from hell.
Yeah, much like how hatred is not really the opposite of love, the "AI doom" folks are really just a side-sect of the "AI awesome" folks.
> But what if there are, in fact, shadowy puppet masters here—and they’re the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else’s.
Yes, the economic engine behind all this, the potential to make money, is what really supercharges everything and lifts it out of niche communities.