This is called the pessimism effect: denying something by focusing on one small aspect of reality while ignoring the overarching trend.
Follow the trendline of ML for the last decade. We've been moving at a breakneck pace, and the progress has been driven both by steady evolution and by random chance. But there is a clear trendline of linear upwards progress, and at times random chance accelerates us past that linear trend.
Stop looking at LLMs; look at the 10-year trendline of ML as a holistic picture. You're drilling down on a specific ML problem and a specific model.
I believe we will see AGI within our lifetime, but by the time we see it the goalposts will have moved and the internet will be loaded with so much AI slop that we won't be amazed by it. The AGI will be slightly stupid at this one thing, and because of that people will say it isn't AI, even though it blows past some Turing test (which itself will be a test where we've moved the goalposts a thousand times).
> But there is a clear trendline of linear upwards progress
This is not the case at all.[1]
And after too much time without a data wipe, those droids go off the freaking rails, becoming too self-aware, and then people just treat it like it's no big deal and an annoyance.
This is the future of AI. AI will be a retarded assistant and everyone will be bored with it.
Idling our way up an illusory social/career escalator the elders convinced us was real.
Too real. Time to be done with the internet for the day. And it’s barely noon.
Exactly. Just 100 years ago AI did not exist at all. Hell, (electronic) computers did not even exist then.
In that incredibly short timeframe of development, AI has come very close to surpassing what took biological evolution millions of years (and in specific domains it already has). If you take the time it took to go from chimp to human, compare it to the time it took to go from the first animal to chimp, and assume that ratio scales linearly to AI evolution, we are very, very close to a similar step.
Of course, it's not that simple and the assumption is bound to be wrong, but to think it might take another 100 years seems misguided given the rapid development in the past.
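For illustration, here's a back-of-the-envelope version of that scaling argument in Python, using assumed round numbers (~600 million years from the first animals to chimps, ~6 million years from chimps to humans, ~70 years of AI research so far):

    # Assumed, rough figures for illustration only.
    animal_to_chimp_years = 600e6   # first animals -> chimps
    chimp_to_human_years = 6e6      # chimps -> humans
    ai_research_years = 70          # electronic computing / AI so far

    # If AI's "final step" takes the same fraction of its journey so far...
    ratio = chimp_to_human_years / animal_to_chimp_years   # ~0.01
    final_step_years = ai_research_years * ratio           # ~0.7 years
    print(f"final step would take roughly {final_step_years:.1f} years")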
To this day, the improvement since the original API version of GPT-4 (later heavily downgraded without a name change) has been less than amazing. Context size increased dramatically, yes, but it's still pitiful, slow, and brutally expensive.
LLMs can't truly reason. It's not about hallucinations. LLMs are fundamentally designed NOT to be intelligent. Is my IntelliJ autocomplete AGI?
> The AGI will be slightly stupid at this one thing, and because of that people will say it isn't AI, even though it blows past some Turing test (which itself will be a test where we've moved the goalposts a thousand times).
I can only respond with a picture
https://substack.com/@msukhareva/note/c-131901009
> We've been moving at a breakneck pace, and the progress has been driven both by steady evolution and by random chance.
Yes, I enjoy being slowed down 19% by AI tooling; that's real breakneck pace.
https://www.infoworld.com/article/4020931/ai-coding-tools-ca...
Just because this breed of autocomplete can drown you in slop very fast doesn't mean we are advancing. If anything, we are regressing.
They hallucinate because they aren't actually working the way you do. They're playing with words. They don't have any kind of mental model, even though they do an extraordinary mimicry of one.
An analogy: it's like trying to parse XML with a regular expression. You may get it to work in 99.99% of your use cases, but it's still completely wrong. Filtering out bad results won't get you there.
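To make the analogy concrete, here's a minimal sketch (the tag names and snippets are made up for illustration): a naive regex happily extracts values from flat XML, then silently returns garbage the moment the document has structure the pattern doesn't model.

    import re

    # A naive "parser": grab everything between <name> tags.
    pattern = re.compile(r"<name>(.*?)</name>")

    flat = "<user><name>Ada</name></user>"
    print(pattern.findall(flat))  # ['Ada'], looks correct

    # Nesting, comments, or CDATA break it, because there is no model of the
    # document's structure, only surface patterns.
    tricky = "<user><!-- <name>not a name</name> --><name>Grace</name></user>"
    print(pattern.findall(tricky))  # ['not a name', 'Grace'], wrong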
That said, the "extraordinary mimicry" is far, far beyond anything I could possibly have imagined. LLMs pass the Turing test with flying colors, without being AGI, and I would have sworn that the one implied the other. So it's entirely possible that we're closer than I think.
A self-preserving AI isn't meaningfully more dangerous than an AI that solves world hunger by killing us all. In fact, it may be less so if it concludes that starting a war with humans is riskier than letting us live.
Human brains are quasi-deterministic. It's just chaos arising from ultimately deterministic phenomena, which can be modeled as a "heat parameter".
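For what it's worth, on the LLM side that "heat parameter" maps onto the sampling temperature. A minimal sketch, assuming a plain softmax-with-temperature decoder (the logits here are made up):

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0, rng=None):
        """Sample a token index; temperature scales how much randomness leaks in."""
        rng = rng or np.random.default_rng()
        # Temperature near 0 approaches a deterministic argmax;
        # higher temperatures flatten the distribution and add "chaos".
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.1]  # made-up scores for three tokens
    print(sample_with_temperature(logits, temperature=0.1))  # almost always 0
    print(sample_with_temperature(logits, temperature=2.0))  # noticeably more varied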
> it only ever responds to a query. It is not at all autonomous.
We can give it feedback loops like CoT, and you can even have it talk to itself. If you think of the feedback loop as the entire system, it is autonomous. Humans are actually doing the same thing; our internal thought process is by definition a feedback loop.
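A rough sketch of that kind of loop, with a hypothetical generate() call standing in for whatever model or API you'd plug in (stubbed here so the loop runs end to end):

    # `generate` is a hypothetical stand-in for any LLM completion call,
    # not a real API from a specific library; replace the stub as needed.
    def generate(prompt: str) -> str:
        return f"(model output continuing: {prompt[-40:]}...)"

    def self_dialogue(seed: str, steps: int = 5) -> list[str]:
        transcript = [seed]
        thought = seed
        for _ in range(steps):
            # Each output is fed back in as the next input; the loop as a
            # whole, not any single response, is the "autonomous" system.
            thought = generate(f"Continue this train of thought:\n{thought}")
            transcript.append(thought)
        return transcript

    print("\n".join(self_dialogue("Are feedback loops enough for autonomy?", steps=3)))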
> If you let it do chain-of-thought for too long or any sort of continuous feedback loop it always goes off the rails.
But this isn't scripted. It's more that the AI goes crazy. Scripting isn't a characteristic that accurately describes anything that's going on.
An AI hallucinating and going off the rails isn't characteristic of scripting; it's characteristic of a lack of control. We can't control AI.
You know, in case you correctly interpreted the headline to mean Wells is saying aliens developed AI out there.
Yes, it may not be AGI and AGI may not come any time soon, but by focusing on that question, people become distracted and don't have as much time to think about how parasitic big tech really is. If it's not a strategy used consciously, it's rather serendipitous for them that the question has come about.
I'm not sure what you're trying to say. Most people don't know the difference between AI and AGI. It's all hype making people think it's a big deal.
I have family who can't help but constantly text about AI this and AI that: how dangerous it might be, or how it will revolutionize something else.
It's not just LLMs that were a leap and bound. For the past decade and more, ML has been advancing at breakneck velocity. We've seen models for scene recognition, models that can read your mind, models that recognize human movement. We were seeing the pieces, the components, and amazing results constantly for over 10 years, and this is independent of LLMs.
And then everyone thinks AI is thousands of years away because we hit a small blip with LLMs in 2 years.
And here's the thing: the blip isn't even solid. LLMs sometimes get shit wrong and sometimes get shit right; we just can't control it. We can't definitively say an LLM can't answer a specific question. Maybe another LLM can get it right; maybe if prompted a different way it will get it right.
The other strange thing is that the LLM shows signs of lying. It's not truthful. It has knowledge of the truth, but the thing's purpose is not really to tell us the truth.
I guess the best way to put it is that current AI sometimes behaves like AGI and sometimes doesn't. It is not consistently AGI. The fact that we built a machine that inconsistently acts like AGI shows how freaking close we are.
But the reality is that no one understands how LLMs work. This fact is definitive. If you think we know how LLMs work, then you are out of touch with reality. Nobody knows how LLMs work, so this article and my write-up are really speculation. We really don't know.
But the 10-year trendline of AI in general is the better guide to future progress. Basing the future on a 2-year trendline of one specific problem (hallucination) in one specific kind of ML model (LLMs) is not predictive.
You can: archive.ph. Copy the link, paste it there.
Needs to be pointed out :) If I move billions of light-years from here, I will be able to create AI :) A light-year is a distance; the title should probably say "decades away".
But I fully believe her argument; I think kids born today will not see any real AI implementation in their lifetime.