
Andrej Karpathy – AGI is still a decade away

https://www.dwarkesh.com/p/andrej-karpathy
233•ctoth•2h ago

Comments

agrover•1h ago
This seems to be the growing consensus.
sputknick•1h ago
There is a very strange, totally coincidental correlation: if you are smart and NOT trying to raise money for an AI start-up, you think AGI is far away, and if you are smart and actively raising money for an AI start-up, then AGI is right around the corner. One of those odd coincidences of modern life.
evandrofisico•1h ago
And self-sustained nuclear fusion is 20 years away, perpetually. On what evidence can he affirm a timeline for AGI when we can barely define intelligence?
helterskelter•1h ago
I'd argue we've had more progress towards fusion than AGI.
FiniteIntegral•1h ago
Yet at the same time "towards" does not equate to "nearing". Relative terms for relative statements. Until there's a light at the end of the tunnel, we don't know how far we have left to go.
chasd00•49m ago
> I'd argue we've had more progress towards fusion than AGI.

way more progress toward fusion than AGI. Uncontrolled runaway fusion reactions were perfected in the 50s (IIRC) with thermonuclear bombs. Controllable fusion reactions have been common for many years. A controllable, self-sustaining, and profitable fusion reaction is all that is left. The goalposts that mark when AGI has been reached haven't even been defined yet.

CaptainOfCoit•1h ago
And a program that can write, sound and paint like a human was 20 years away perpetually as well, until it wasn't.
galangalalgol•1h ago
This is the key insight, I believe. It is inherently unpredictable. There are species that pass the mirror test with far fewer equivalent parameters than large models are already using. Carmack has said something to the effect that about 10ksloc would glue the right existing architectures together in the right way to make AGI, but that it might take decades to stumble on that way, or someone might find it this afternoon.
walterbell•1h ago
> like a human

Humans have since adapted to identify content differences and assign lower economic value to content created by programs, i.e. the humans being "impersonated" and "fooled" are themselves evolving in response to imitation.

woodruffw•1h ago
Is this true? I think it’s equally easy to claim that these phenomena are attributable to aesthetic adaptability in humans, rather than the ability of a machine to act like a human. The machine still doesn’t possess intentionality.

This isn’t a bad thing, and I think LLMs are very impressive. But I do think we’d hesitate to call their behavior human-like if we weren’t predisposed to anthropomorphism.

input_sh•1h ago
Another way to put it is that it writes, sounds and paints as the Internet's most average user.

If you train it on a bunch of paintings whose quality ranges from a toddler's to Picasso's, it's not going to make one that's better than Picasso's; it's going to output something comparable to the most average painting it was trained on. If you then adjust your training data to include only the world's best paintings since we began to paint, the outcome will improve, but it'll still just be another better-than-human-average painting. If you then leave it running 24/7, it'll churn out a bunch of better-than-human-average paintings, but there's still an easily identifiable ceiling it won't go above.

An oracle that always returns the most average answer certainly has its use cases, but it's fundamentally opposed to the idea of superintelligence.

CaptainOfCoit•26m ago
> Another way to put it is that it writes, sounds and paints as the Internet's most average user.

Yes, I agree: it doesn't exactly produce high-quality stuff, unless the person using it is already an expert who could produce high-quality stuff without it too.

But there is no denying that those things were regarded as "far-near future, maybe" for a long time, until some people put the right pieces together.

adastra22•1h ago
Fusion used to be perpetually 30 years away. We’re making progress!
nh23423fefe•1h ago
stop repeating that. first, it isn't true that intelligence is barely defined. https://arxiv.org/abs/0706.3639

second a definition is obviously not a prerequisite as evidenced by natural selection

mathgradthrow•1h ago
An Arxiv paper listing 70 different definitions of intelligence is not the evidence that you seem to think it is.
thomasdziedzic•1h ago
> stop repeating that. first, it isn't true that intelligence is barely defined. https://arxiv.org/abs/0706.3639

I don't think he should stop, because I think he's right. We lack a definition of intelligence that doesn't do a lot of hand waving.

You linked to a paper with 18 collective definitions, 35 psychologist definitions, and 18 ai researcher definitions of intelligence. And the conclusion of the paper was that they came up with their own definition of intelligence. That is not a definition in my book.

> second a definition is obviously not a prerequisite as evidenced by natural selection

Right, we just need a universe, several billion years, and a sprinkle of evolution, and we'll also get intelligence. Maybe.

BoredPositron•1h ago
AGI is going to be the new fusion.
Topgamer7•1h ago
Whenever someone brings up "AI", I tell them AI is not real AI. Machine learning is a more apt buzzword.

And real AI is probably like fusion: it's always 10 years away.

CharlesW•1h ago
> Whenever someone brings up "AI", I tell them AI is not real AI.

You and also everyone since the beginning of AI. https://quoteinvestigator.com/2024/06/20/not-ai/

zamadatix•1h ago
People saying that usually mean it as "AI is here and going to change everything overnight now"; yet, taken literally, it means "we're actually over 50 years into AI, and things will likely continue to advance slowly over decades".

The common thread between those who take it as "AI is anything that doesn't work yet" and those who say "what we have is still not yet AI" is that this current technology could probably have used a less distracting marketing name, one that talks about what it delivers rather than what it's supposed to be delivering.

lo_zamoyski•1h ago
AI is in the eye of the beholder.
adastra22•1h ago
Machine learning as a descriptive phrase has stopped being relevant. It implies the discovery of information in a training set. The pre-training of an LLM is most definitely machine learning. But what people are excited and interested in is the use of this learned data in generative AI. “Machine learning” doesn’t capture that aspect.
hnuser123456•1h ago
It's a valid term that is worth introducing to the layperson IMO. Let them know how the magic works, and how it doesn't.
adastra22•1h ago
Machine learning is only part of how an LLM agent works though. An essential part, but only a part.
sdenton4•1h ago
I see a fair amount of bullshit in the LLM space though, where even cursory consideration would connect the methods back to well-known principles in ML (and even statistics!) to measure model quality and progress. There's a lot of 'woo, it's new! we don't know how to measure it exactly but we think it's groundbreaking!' which is simply wrong.

From where I sit, the generative models provide more flexibility but tend to underperform on any particular task relative to a targeted machine learning effort, once you actually do the work on comparative evaluation.

adastra22•1h ago
I think we have a vocabulary problem here, because I am having a hard time understanding what you are trying to say.

You appear to be comparing apples to oranges. A generation task is not a categorization task. Machine learning solves categorization problems. Generative AI uses models trained by machine learning methods, but in a very different architecture, to solve generative problems. A completely different and incomparable application domain.

IshKebab•1h ago
How does "it's called machine learning not AI" help anyone know how it works? It's just a fancier sounding name.
hnuser123456•17m ago
Because if they're curious, they can look up (or ask an "AI") about machine learning, rather than just AI, and learn more about the capabilities and difficulties and mechanics of how it works, learn some of the history, and have grounded expectations for what the next 10 years of development might look like.
simpleladle•1h ago
But the things we try to make LLMs do post-pre-training are primarily achieved via reinforcement learning. Isn't reinforcement learning machine learning? Correct me if I'm misconstruing what you're trying to say here
adastra22•43m ago
You are still talking about training. Generative applications have always been fundamentally different from classification problems, and have now (in the form of transformers and diffusion models) taken on entirely new architectures.

If “machine learning” is taken to be so broad as to include any artificial neural network, all of which are trained with back propagation these days, then it is useless as a term.

The term “machine learning” was coined in the era of specialized classification agents that would learn how to segment inputs in some way. Think email spam detection, or identifying cat pictures. These algorithms are still an essential part of both the pre-training and RLHF fine-tuning of LLMs. But the generative architectures are new and essential to the current interest in and hype surrounding AI at this point in time.

CSSer•1h ago
The best part of this is that a couple of years ago I watched Sam Altman, in response to a question about energy consumption, say he really thinks fusion is a short time away. That was the moment I knew he's a quack.
ctkhn•1h ago
Not to be anti-YC on their forum, but the VC business model is all about splashing cash on a wide variety of junk that will mostly be worthless, hyping it to the max, and hoping one or two turn out like Amazon or Facebook. He's not an engineer; he's like Steve Jobs without the good parts.
2OEH8eoCRo0•1h ago
Fusion is known science while AGI is still very much an enigma.
timeon•1h ago
He had to use distraction because he knows that he is playing a part in increasing emissions.
jacobolus•1h ago
Altman recently said, in response to a question about the prospect of half of entry-level white-collar jobs being replaced by "AI" and college graduates being put out of work by it:

> “I mean in 2035, that, like, graduating college student, if they still go to college at all, could very well be, like, leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and I that, like, we had to do this kind of, like, really boring old kind of work and everything is just better."

Which should be reassuring to anyone having trouble finding an entry-level job as an illustrator or copywriter or programmer or whatever.

rightbyte•3m ago
So STNG in 10 years?
wilg•1h ago
Arguing about the definitions of words is rarely useful.
Spare_account•1h ago
How can we discuss <any given topic> if we are talking about different things?
IanCal•1h ago
Well that's rather the point - arguing about exceptionally heavily used terminology isn't useful because there's already a largely shared understanding. Stepping away from that is a huge effort, unlikely to work and at best all you've done is change what people mean when they use a word.
bcrosby95•1h ago
The point is to establish definitions rather than argue about them. You might save yourself from two pointless arguments.
Root_Denied•1h ago
Except AI already had a clear definition well before it started being used as a way to inflate valuations and push marketing narratives.

If nothing else it's been a sci-fi topic for more than a century. There's connotations, cultural baggage, and expectations from the general population about what AI is and what it's capable of, most of which isn't possible or applicable to the current crop of "AI" tools.

You can't just change the meaning of a word overnight and toss all that history away, which is why it comes across as an intentionally dishonest choice in the name of profits.

layer8•1h ago
Maybe do some reading here: https://en.wikipedia.org/wiki/History_of_artificial_intellig...
Root_Denied•25m ago
And you should do some reading into the edit history of that page. Wikipedia isn't immune from concerted efforts to astroturf and push marketing narratives.

More to the point, the history of AI up through about 2010 talks about attempts to get it working using different approaches to the problem space, followed by a shift in the definitions of what AI is in the 2005-2015 range (narrow AI vs. AGI). There's plenty of talk about the various methods and lines of research that were being attempted, but very little about publicly pushing to call commercially available deliverables AI.

Once we got to the point where large amounts of VC money was being pumped into these companies there was an incentive to redefine AI in favor of what was within the capabilities and scope of machine learning and LLMs, regardless of whether that fit into the historical definition of AI.

bcrosby95•1h ago
AI is an overloaded term.

I took an AI class in 2001. We learned all sorts of algorithms classified as AI, including various ML techniques; among them, perceptrons.

porphyra•1h ago
back in the day alpha-beta search was AI hehe
pixelpoet•1h ago
When I was a young child in Indonesia, we had an exceptionally fancy washing machine with all sorts of broken-English superlatives on it, including "fuzzy logic artificial intelligence", and I used to watch it doing the turbo spin or whatever, wondering what it was thinking. My poor mom thought I was retarded.
timidiceball•1h ago
That was an impressive takeaway from the first machine learning course I took: that many things previously under the umbrella of artificial intelligence have since been demystified and demoted to implementations we now just take for granted. Some examples were real-world map route planning for transport, locating faces in images, and Bayesian spam filters.
brandonb•1h ago
Andrew Ng has a nice quote: “Instead of doing AI, we ended up spending our lives doing curve fitting.”

Ten years ago you'd be ashamed to call anything "AI," and would say machine learning if you wanted to be taken seriously, but neural networks have really brought back the term--and for good reason, given the results.

layer8•1h ago
AI is whatever is SOTA in the field; it always has been.
kachapopopow•1h ago
AGI is already here if you shift some goal posts :)

From skimming the conversation, it seems to mostly revolve around LLMs (transformer models), which are probably not going to be the way we obtain AGI to begin with. Frankly, the architecture is too simple to be AGI, but the reason there's so much hype is that it is so simple to begin with. So really, I don't know.

ecocentrik•1h ago
LLMs are close enough to pass the Turing Test. That was a huge milestone. They are capable of abstract reasoning and can perform many tasks very well but they aren't AGI. They can't teach themselves to play chess at the level of a dedicated chess engine or fly an airplane using the same model they use to copypasta a React UI. They can only fool non-proficient humans into believing that they might be capable of doing those things.
password54321•20m ago
The Turing Test was a thought experiment, not a real benchmark for intelligence. If you read the paper the idea originated in, it is largely philosophical.

As for abstract reasoning: if you look at ARC-2, they are barely capable, though at least some progress has been made on the ARC-1 benchmark.

throwaway-0001•22m ago
A transistor is very simple too, and here we are. Don’t dismiss something because it’s simple.
password54321•18m ago
You've got to look at how it scales. LLMs have already stopped increasing in parameter count, as they don't get better by scaling them up anymore. New ideas are needed.
throwaway-0001•12m ago
You’re right… but still, what has been done up to today is already significant and useful.
ares623•1h ago
Is that at current investment levels?
konart•1h ago
2035 singularity etc
spydum•1h ago
2038 will be more significant
edbaskerville•1h ago
For a second I thought you were citing some special mythological timeline from AI folks.

Then I got it. :) Something so mundane that maybe the AIs can help prevent it.

asdev•1h ago
Are researchers scared to just come out and say it because they'll be labeled as wrong if the extreme tail case happens?
andy_ppp•1h ago
No, it’s because of money and the hype cycle.
ionwake•1h ago
I mean, you say this, but I haven't touched a line of code as a programmer in months, having been totally replaced by AI.

Sure, I now "control" the AI, but I still think these no-AGI-for-two-decades claims are a bit rough.

andy_ppp•1h ago
I think AI is great and extremely helpful, but if you've been replaced already, maybe you now have more time to make better code and decisions? If you think the AI output is good by default, I think maybe that's a problem. I think general intelligence is something other than what we have now; these systems are extremely bad at updating their knowledge and hopeless at applying understanding from one area to another. For example, self-driving cars are still so brittle that every city needs new and specific training, whereas a human can just take a car with the controls on the opposite side and safely drive in another country.
an0malous•30m ago
Let’s see the code
Goofy_Coyote•1h ago
I don’t think they’re scared, I think they know it’s a lose-tie game.

If you’re correct, there’s not much reward aside from the “I told you so” bragging rights; if you’re wrong though, boy oh boy, you’ll be deemed unworthy.

You only need to get one extreme prediction right (stock market collapse, AI taking over, etc ), then you’ll be seen as “the guru”, the expert, the one who saw it coming. You’ll be rewarded by being invited to boards, panels and government councils to share your wisdom, and be handsomely paid to explain, in hindsight, why it was obvious to you, and express how baffling it was that no one else could see what you saw.

On the other hand, if you predict an extreme case and get it wrong, there are virtually zero penalties; no one will hold it against you, and no one will even remember.

So yeah, fame and fortune lie in taking many shots at predicting disasters, not the other way around.

strangattractor•1h ago
They are afraid to say it because it may affect the funding. Currently, with all the hype surrounding AI, investors and governments will literally shower you with funding. Always follow the money :) Buy the dream - sell the reality.
strangattractor•1h ago
Also I think Andrej is just an honest guy.
segmondy•1h ago
AGI is already here.
chronci739•1h ago
> AGI is already here.

cause elon musk says FSD is coming in 2017?

adastra22•1h ago
Because we already have artificial (man-made) general (contrast with domain specific) intelligence (algorithmic problem solvers). A.G.I.

If ChatGPT is not AGI, somebody has moved goalposts.

walkabout•1h ago
I think a baseline requirement would be that it… thinks. That’s not a thing LLMs do.
adastra22•1h ago
That’s an odd claim, given that we have so-called thinking models. Is there a specific way you have in mind in which LLMs are not thinking processes?
blibble•38m ago
I can call my cat an elephant

it doesn't make him one

010101010101•19m ago
Both "general" and "intelligence" are _at least_ easily arguable without moving any goal posts, not that goal posts have ever been well established in the first place.
010101010101•1h ago
Where are you because it’s sure not where I am…
segmondy•1h ago
Five years ago, everyone would have agreed that what we have today is AGI.
rvz•1h ago
No one agrees on what AGI even is, except that the definitions change more often than the weather, which makes the term meaningless.
zeknife•39m ago
At least until they spend some time with it
password54321•33m ago
AI psychosis is already here.
imiric•1h ago
It has always been "a decade away".

But nothing will make grifters richer than promising it's right around the corner.

notepad0x90•1h ago
I'm betting we'll have either cold fusion or the "year of the linux desktop" (finally) before AGI.
Mistletoe•1h ago
Good because we have no framework whatsoever enabled for if it is legal or ethical to turn it off. Is that murder? I think so.
lyu07282•1h ago
We don't even have any intention of doing anything about the millions of people losing their jobs and being driven into poverty by it; in fact, the investments right now gamble on, and depend on, that wealth transfer happening in the future. We don't even give a shit about other humans; there is absolutely no way we will care about a (hypothetical) different life form entirely.
qgin•1h ago
We'll be living in a world of 50% unemployment and still debating whether it's "true AGI"
ciconia•1h ago
It's funny how there's such a pervasive cynicism about AI in the developer community, yet everyone is still excited about vibe coding. Strange times...
leptons•54m ago
What developer is excited about "vibe coding"? The only people excited about "vibe coding" are people who can't code.
deadbabe•1h ago
Frankly it doesn’t matter if it’s a decade away.

AI has now been revealed to the masses. When AGI arrives most people will barely notice. It will just feel like slightly better LLMs to them. They will have already cemented notions of how it works and how it affects their lives.

angiolillo•1h ago
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra

The debate about AGI is interesting from a philosophical perspective, but from a practical perspective AI doesn't need to get anywhere close to AGI to turn the world upside down.

flatline•1h ago
I don’t even know what AGI is, and neither does anyone else as far as I can tell. In the parts of the video I watched, he cites several things missing which all have to do with autonomy: continual automated updates of internal state, fully autonomous agentic behavior, etc.

I feel like GPT 3 was AGI, personally. It crossed some threshold that was both real and magical, and future improvements are relying on that basic set of features at their core. Can we confidently say this is not a form of general intelligence? Just because it’s more a Chinese Room than a fully autonomous robot? We can keep moving the goalposts indefinitely, but machine intelligence will never exactly match that of humans.

mpalmer•1h ago

    It crossed some threshold that was both real and magical
Only compared to our experience at the time.

    and future improvements are relying on that basic set of features at their core
Language models are inherently limited, and it's possible - likely, IMO - that the next set of qualitative leaps in machine intelligence will come from a different set of ideas entirely.
zer00eyz•1h ago
Learning != Training.

That's not a period, it's a full stop. There is no debate to be had here.

IF an LLM makes some sort of breakthrough (and massive data collation allows for that to happen), it needs to be "retrained" to absorb its own new invention.

But we also have a large problem in our industry, where hardware evolved to make software more efficient. Not only is that not happening any more but we're making our software more complex and to some degree less efficient with every generation.

This is particularly problematic in the LLM space: every generation of "ML" on the LLM side seems to be getting less efficient with compute. (Note: this isn't quite the case in all areas of ML; YOLO models running on embedded compute are kind of amazing.)

Compactness, efficiency and reproducibility are directions the industry needs to evolve in, if it ever hopes to be sustainable.

throw54465665•1h ago
Most humans do not even have general intelligence! Many students are practically illiterate and cannot even read and understand a book or manual!

We are approaching a situation where AI will make most decisions, and people will wear it as a skin suit to fake competency!

zeroonetwothree•1h ago
I wouldn’t say that any specific skill (like literacy) is required to have intelligence. It’s more the capability to learn skills and build a model of the world and the people in it using abstract reasoning.

Otherwise we would have to say that pre-literacy societies lacked intelligence, which would be silly since they are the ones that invented writing in the first place!

throw54465665•56m ago
But most people cannot even comprehend an audiobook!

> capability to learn skills and build a model

We are adults, not children! At some point the brain loses plasticity, and it becomes very difficult to learn new stuff!

And good luck competing with Asians or AI!

AnimalMuppet•45m ago
Most people cannot comprehend an audiobook? No way.

If you have evidence for that claim, show it. Otherwise, no, you're just making stuff up.

throw54465665•41m ago
Sorry, it should be "most Americans"!

Very simple proof: they cannot even read or listen to their own constitution!

zeroonetwothree•1h ago
I think most people would consider AGI to be roughly matching that of humans in all aspects. So in that sense there’s no way that GPT3 was AGI. Of course you are free to use your own definition, I’m just reflecting what the typical view would be.
colonCapitalDee•1h ago
AGI is when a computer can accomplish every cognitive task a typical human can. Given tools to speak, hear, and manipulate a computer, an AGI could be dropped in as a remote employee and be successful.
throwaway-0001•4m ago
A human is AGI when they can accomplish all the tasks ChatGPT can… how come the reverse doesn't work?
zeknife•1h ago
It also doesn't need to be good for anything to turn the world upside down, but it would be nice if it was
IshKebab•1h ago
Fortunately I haven't heard anyone make silly claims about stochastic parrots and the impossibility of conscious computers for quite a while.
jaccola•1h ago
I think this quote is often misapplied. The question "can a submarine safely move through water" IS a very interesting question (especially if you are planning a trip in one!).

Obviously this quote would be well applied if we were at a stage where computers were better at everything humans can do and some people were saying "This is not AGI because it doesn't think exactly the same as a human". But we aren't anywhere near this stage yet.

hatmanstack•1h ago
Am I dating myself by thinking Kurzweil is still relevant?

2029: Human-level AI

2045: The Singularity - machine intelligence 1 billion times more powerful than all human intelligence

Based on exponential growth in computing. He predicts we'll merge with AI to transcend biological limits. His track record is mixed, but 2029 looks more credible post-GPT-5. The 2045 claim remains highly speculative.

Barrin92•1h ago
It's curious that Kurzweil's predictions about transcending biology align so closely with his expected lifespan. Reminds me of someone saying, if you ask a researcher for a timeline of a breakthrough they'll give you the expected span of their career.

Hegel thought history ended with the Prussian state, Fukuyama thought it ended in liberal America, Paul thought judgement day was so close you need not bother to marry, and the singularity always comes around when the singularitarians get old. Funny how that works.

williamcotton•1h ago
The biggest problem I've had with Kurzweil and the exponential growth curve is that the elbow depends entirely on how you plot and scale the axes. From a certain vantage point we have arguably been on an exponential curve since the advent of Homo sapiens.
somenameforme•1h ago
I lost all respect for him after reading about his views on medical immortality. His argument is that human life expectancy has been constantly increasing over time, and he calculated, based on some arbitrary rate of acceleration, that science would be extending human life expectancy by more than a year per year - medical immortality, in other words - all expected to happen just prior to the time he's reaching his final years.

The overwhelming majority of all gains in human life expectancy have come due to reductions in infant mortality. When you hear about things like a '40' year life expectancy in the past it doesn't mean that people just dropped dead at 40. Rather if you have a child that doesn't make it out of childhood, and somebody else that makes it to 80 - you have a life expectancy of ~40.

If you look back to the upper classes of old their life expectancy was extremely similar to those of today. So for instance in modern history, of the 15 key Founding Fathers, 7 lived to at least 80 years old: John Adams, John Quincy Adams, Samuel Adams, Jefferson, Madison, Franklin, John Jay. John Adams himself lived to 90. The youngest to die were Hamilton who died in a duel, and John Hancock who died of gout of an undocumented cause - it can be caused by excessive alcohol consumption.

All the others lived into their 60s and 70s, so their overall life expectancy was pretty much the same as we have today. And this was long before vaccines, or even before we knew that surgeons washing their hands before surgery was a good thing to do. It's the same as you go back further into history: a study of men of renown in Ancient Greece found an average lifespan of 71.3 years [1], and that was from thousands of years ago!

Life expectancy at birth is increasing, but longevity is barely moving. And as Kurzweil has almost certainly done plentiful research on this topic, he is fully aware of this. Cognitive dissonance strikes again.

[1] - https://pubmed.ncbi.nlm.nih.gov/18359748/

asah•17m ago
This is backward-looking. Future advances don't have to work like this.

Example: 20-ish years ago, stage IV cancer was a quick death sentence. Now many people live with various stage IV cancers for many years, and some even "die of something else". These advancements obviously skew towards helping older people.

seydor•1h ago
I wouldn't consider either of them qualified to answer that question
awesome_dude•1h ago
I have massive respect for Andrej, my first encounter with "him" was following his tutorials/notes when he was a grad student/tutor for AI/ML.

I was quite disappointed when he went to work for Tesla; I think he had some achievements there, but not nearly the impact I believe he potentially has.

His switch (back?) to OpenAI was, in my mind, much more in keeping with where his spirit really lies.

So, with that in mind, maybe I've drunk too much kool aid, maybe not. But I'm in agreement with him, the LLMs are not AGI, they're bloody good natural language processors, but they're still regurgitating rather than creating.

Essentially that's what humans do, we're all repeating what our education/upbringing told us worked for our lives.

But we all recognise that what we call "smart" is people recognising/inventing ways to do things that did not exist before. In some cases it's about applying a known methodset to a new problem; in others it's about using a substance/method in a way that other substances/methodsets are used, where the different substance/methodset produces something interesting (think: oh, instead of boiling food in water, we can boil food in animal fats... frying).

AI/LLMs cannot do this, not at all. That spark of creativity is agonisingly close, but, like all 80/20 problems, is likely still a while away.

The timeline (10 years) - it was the early 2010s (over 10 years ago now) that the idea of backward propagation, after a long AI winter, finally came of age. The idea had been floating about since at least the 1970s. And that ushered in the start of our current revolution, that and "deep learning" (albeit with at least one more AI winter spanning the 4 or 5 years before LLMs arrived).

So, given that timeline, and the constraints in the current technology, I think that Andrej is on the right track, and it will be interesting to see where we are in ten years' time.

chasd00•28m ago
If OpenAI hadn't put a chat interface in front of an LLM and made it available to the public, wouldn't we still be in the same AI winter? Google, Meta, Microsoft - all of the major players were doing lots of LLM work already; it wasn't until the general public found out through OpenAI's website that it really took off. I can't remember who said it - it was some CEO - that OpenAI had no moat, but neither did anyone else. They all had LLMs of their own already. Was the breakthrough the LLM, or making it accessible to the general public?
throwaway-0001•10m ago
How can you tell whether you regurgitated this comment or were being truly creative? If you can show me objectively, I'm sold.
PedroBatista•1h ago
Following the comments here, yes: AGI is the new Cold Fusion.

However, don't let the bandwagon (from either side) cloud your judgment. Even warm fusion, or any fusion at all, is still very useful and it's here to stay.

This whole AGI and "the future" thing is mostly a VC/banks and shovel-sellers problem. A problem that has become ours too, because of the ridiculous amounts of money "invested"; so even warm fusion is not enough from an investment-vs-expectations perspective.

They are already playing musical chairs with the money; unfortunately, we already know who's going to pay for all of this "exuberance" in the end.

I hope this whole thing crashes and burns as soon as possible, not because I don't "believe" in AI, but because people have been absolutely stupid about it. The workplace has become unbearable with all this stupidity, the fake "courage" about every single problem, and the judgment your run-of-the-mill dipshit manager now passes on the value of work and knowledge.

jb1991•1h ago
I would bet all the assets I own that AGI will not be seen in the lifetime of anyone reading this message right now.

That includes anyone reading this message long after the lives of those who read it on its post date have ended.

Which of course raises the interesting question of how I can make good on this bet.

colecut•1h ago
If you are right, you don't have to
rokkamokka•1h ago
Will you take a wager of my one dollar versus your life assets? :)
plaidfuji•1h ago
Should probably just short nvidia
asah•20m ago
Depends on the definition; I might take that bet, because under some definitions we're already there.

Example: better-than-average-human performance across many thinking tasks is already done.

1970-01-01•1h ago
Great quote:

"When you get a demo and something works 90% of the time, that’s just the first nine. Then you need the second nine, a third nine, a fourth nine, a fifth nine. While I was at Tesla for five years or so, we went through maybe three nines or two nines. I don’t know what it is, but multiple nines of iteration. There are still more nines to go.

That’s why these things take so long."

onlyrealcuzzo•1h ago
Importantly, the first 9s are the easiest.

If you need to get to 9 9s, the 9th 9 could be more effort than the other 8 combined.
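
To make the arithmetic concrete, here's a minimal sketch in Python (made-up numbers: it assumes each nine costs the same effort but only removes 90% of the failures still left, so the cost per failure eliminated grows nine after nine):

    # Each "nine" of reliability cuts the remaining failures by 10x.
    failure_rate = 0.1  # 90% reliable: the first nine
    for nine in range(1, 6):
        print(f"nine {nine}: {1 - failure_rate:.5%} reliable, "
              f"{failure_rate * 1e6:,.0f} failures per million runs")
        failure_rate /= 10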

6d6b73•1h ago
Even without AGI, current LLMs will change society in ways we can't yet imagine. And this is both good and bad. Current LLMs are just a different type of automation, not mechanical like control systems and robots, but intellectual. They don't have to be able to think independently, but as long as they automate some white-collar tasks, they will change how the rest of society works. The simple transistor is just a small electronic component that is a better version of a tube, and yet it changed everything in a few decades. How will the world change because of LLMs? I have no idea, but I know it doesn't have to be AGI to cause a lot of upheaval.
flyinglizard•1h ago
The same things you describe that make LLMs great also make them entirely non-deterministic and unreliable for serious automated applications.
password54321•35m ago
They can't even automate chat support, the very thing you would think LLMs would be good at. Yet I always end up needing to talk to a person.
Imnimo•1h ago
>What takes the long amount of time and the way to think about it is that it’s a march of nines. Every single nine is a constant amount of work. Every single nine is the same amount of work. When you get a demo and something works 90% of the time, that’s just the first nine. Then you need the second nine, a third nine, a fourth nine, a fifth nine. While I was at Tesla for five years or so, we went through maybe three nines or two nines. I don’t know what it is, but multiple nines of iteration. There are still more nines to go.

I think this is an important way of understanding AI progress. Capability improvements often look exponential on a particular fixed benchmark, but the difficulty of the next step up is also often exponential, and so you get net linear improvement with a wider perspective.

czk•1h ago
like leveling to 99 in old school runescape
fbrchps•1h ago
The first 92% and the last 92%, exactly.
zeroonetwothree•1h ago
Or Diablo 2
wilfredk•1h ago
Perfect analogy.
somanyphotons•1h ago
This is an amazing quote that really applies to all software development
zeroonetwothree•1h ago
Well, maybe not all. I’ve definitely built CRUD UIs that were linear in effort. But certainly anything technically challenging or novel.
sdenton4•1h ago
Ha, I often speak of doing the first 90% of the work, and then moving on to the following 90% of the work...
inerte•1h ago
I use "The project is 90% ready, now we only have to do the other half"
typpilol•55m ago
92% is half actually - RuneScape Players
JimDabell•48m ago
> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

— Tom Cargill, Bell Labs (September 1985)

https://dl.acm.org/doi/pdf/10.1145/4284.315122

zeroonetwothree•1h ago
When I worked at Facebook they had a slogan that captured this idea pretty well: “this journey is 1% finished”.
gowld•1h ago
Copied from Amazon's "Day 1".
fair_enough•1h ago
Reminds me of a time-honored aphorism in running:

A marathon consists of two halves: the first 20 miles, and then the last 10k (6.2mi) when you're more sore and tired than you've ever been in your life.

tylerflick•1h ago
I think I hated life most after 20 miles. Especially in training.
jakeydus•36m ago
This is 100% unrelated to the original article, but I feel like there's an underreported additional first half. As a bigger runner who still loves to run: the first two or three miles, before I have enough endorphins to get into the zen state that makes me love running, are the first half; then it's 17 miles of that amazing meditative mindset. Then the last 10k sucks.
awesome_dude•14m ago
Just, FTR, endorphins cannot pass the blood-brain barrier:

http://hopkinsmedicine.org/health/wellness-and-prevention/th...

sarchertech•10m ago
Why not just run 20 miles then?
rootusrootus•4m ago
Because then it wouldn't be a challenge and nobody would care about the achievement.
rootusrootus•5m ago
I suspect that is true for many difficult physical goals.

My dad told me that the first time you climb a mountain, there will likely be a moment not too distant from the top when you would be willing to just sit down and never move again, even at the risk to your own life. Even as you can see the goal not far away.

He also said that it was a dangerous enough situation that as a climb leader he'd start kicking you if he had to, if you sat down like that and refused to keep climbing. I'm not a climber myself, though, so this is hearsay, and my dad is long dead and unable to remind me of what details I've forgotten.

ekjhgkejhgk•1h ago
The interview which I've watched recently with Rich Sutton left me with the impression that AGI is not just a matter of adding more 9s.

The interviewer had an idea that he took for granted: that to understand language you have to have a model of the world. LLMs seem to understand language; therefore they've trained a model of the world. Sutton rejected the premise immediately. He might be right to be skeptical here.

sysguest•48m ago
yeah that "model of the world" would mean:

babies are already born with "the model of the world"

but a lot of experiments on babies/young kids tell otherwise

rwj•44m ago
Lots of experiments show that babies develop important capabilities at roughly the same times. That speaks to inherited abilities.
ben_w•37m ago
> babies are already born with "the model of the world"

> but a lot of experiments on babies/young kids tell otherwise

I believe they are born with such a model? It's just that model is one where mummy still has fur for the baby to cling on to? And where aged something like 5 to 8 it's somehow useful for us to build small enclosures to hide in, leading to a display of pillow forts in the modern world?

sysguest•4m ago
damn I guess I had to be more specific:

"LLM-level world-detail knowledge"

ekjhgkejhgk•28m ago
> yeah that "model of the world" would mean: babies are already born with "the model of the world"

No, not necessarily. Babies don't interact with the world only by reading what people wrote on Wikipedia and StackOverflow, which is how these models are trained. Babies do things to the world and observe what happens.

I imagine it's similar to the difference between a person sitting on a bicycle and trying to ride it, vs a person watching videos of people riding bicycles.

I think it would actually be a great experiment. If you take a person who has never ridden a bicycle in their life and feed them video of people riding bicycles, plus literature about bikes, fiction and non-fiction, at some point I'm sure they'll be able to talk about it like they have huge experience riding bikes, but they won't be able to ride one.

exe34•33m ago
To me, it's a matter of a very big checklist - you can keep adding tasks to the list, but if it keeps marching onwards checking things off your list, some day you will get there. whether it's a linear or asymptotic march, only time will tell.
cactusplant7374•27m ago
That's like saying that if we image every neuron in the brain we will understand thinking. We can build these huge databases and they tell us nothing about the process of thinking.
exe34•19m ago
What if we copy the functionality of every neuron? what if we simply copy all the skills that those neurons compute?
rootusrootus•9m ago
Do we even know the functionality of every neuron?
ekjhgkejhgk•23m ago
I don't know if you will get there, that's far from clear at this stage.

Did you see the recent video by Nick Beato [1] where he asks various models about a specific number? The models that get it right are the models that consume youtube videos, because there was a youtube video about that specific number. It's like, these models are capable of telling you about very similar things that they've seen, but they don't seem like they understand it. It's totally unclear whether this is a quantitative or qualitative gap.

[1] https://www.youtube.com/watch?v=TiwADS600Jc

godelski•14m ago

  > that to understand knowledge you have to have a model of the world.
You have a small but important mistake: it's reciting (or even applying) knowledge that needs no world model. To understand does actually require one.

Think of it this way: can you pass a test without understanding the test material? Certainly we have all seen people we thought were idiots do well in class, while we've also seen people we thought were geniuses fail. The test and understanding usually correlate, but not perfectly, right?

The reason I say understanding requires a world model (and I would not say LLMs understand) is because to understand you have to be able to detail things. Look at physics, or the far more detail oriented math. Physicists don't conclude things just off of experimental results. It's an important part, but not the whole story. They also write equations, ones which are counterfactual. You can call this compression if you want (I would and do), but it's only that because of the generalization. But it also only has that power because of the details and nuance.

With AI, many of these people have been screaming for years (check my history) that what we're doing won't get us all the way there. Not because we want to stop the progress, but because we wanted to ensure continued and accelerated progress. We knew the limits and were saying "let's try to get ahead of this problem", but were told "that'll never be a problem. And if it is, we'll deal with it when we deal with it." It's why Chollet made the claim that LLMs have actually held AI progress back: because the story that was sold was "AGI is solved, we just need to scale" (i.e. more money). I do still wonder how different things would be if those of us pushing back had been able to continue and scale our work (research isn't free, so yes, people did stop us). We always had the math to show that scale wasn't enough, but it's easy to say "you don't need math" when you can see progress. The math never said no progress, nor no acceleration; the math said there's a wall, and it's easier to adjust now than when we're closer and moving faster. Sadly I don't think we'll ever shift the money over. We still evaluate success weirdly: successful predictions don't matter. You're still heralded if you made a lot of money in VR and Bitcoin, right?

tyre•14m ago
There is actually some evidence from Anthropic that LLMs do model the world. This paper[0] tracing their "thought" is fascinating. Basically an LLM translating across languages will "light up" (to use a rough fMRI equivalent) for the same concepts (e.g. bigness) across languages.

It does have clusters of parameters that correlate with concepts, not just randomly "after X word tends to have Y word." Otherwise you would expect all of Chinese to be grouped in one place, all of French in another, all of English in another. This is empirically not the case.

I don't know whether you need a model of the world to understand knowledge, but at least as far as language goes, LLMs very much do seem to be modeling.

[0]: https://www.anthropic.com/research/tracing-thoughts-language...
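
You can poke at a weak version of this yourself with an off-the-shelf multilingual embedding model. A sketch only: this probes a sentence-embedding model, not the LLM internals Anthropic traced, and the model name is just one common choice, but it illustrates the same clustering idea:

    # pip install sentence-transformers
    # Expectation: the same concept in different languages embeds closer
    # together than different concepts in the same language.
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.util import cos_sim

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    emb = model.encode(["big", "grand", "enorme", "small", "petit"])

    print(cos_sim(emb[0], emb[1]))  # "big" vs French "grand": expect high
    print(cos_sim(emb[0], emb[4]))  # "big" vs French "petit": expect lower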

manmal•7m ago
> Basically an LLM translating across languages will "light up" (to use a rough fMRI equivalent) for the same concepts (e.g. bigness) across languages

I thought that’s the basic premise of how transformers work - they encode concepts into high dimensional space, and similar concepts will be clustered together. I don’t think it models the world, but just the texts it ingested. It’s observation, not understanding.

SR2Z•4m ago
Right, but modeling the structure of language is a question of modeling word order and binding affinities. It's the Chinese Room thought experiment - can you get away with a form of "understanding" which is fundamentally incomplete but still produces reasonable outputs?

Language in itself attempts to model the world and the processes by which it changes. Knowing which parts-of-speech about sunrises appear together and where is not the same as understanding a sunrise - but you could make a very good case, for example, that understanding the same thing in poetry gets an LLM much closer.

jlas•1h ago
Notably, the scaling-law paper shows its result graphs on a log scale.
omidsa1•1h ago
I also quite like the way he puts it. However, from a certain point onward, the AI itself will contribute to the development—adding nines—and that’s the key difference between this analogy of nines in other systems (including earlier domain‑specific ML ones) and the path to AGI. That's why we can expect fast acceleration to take off within two years.
AnimalMuppet•51m ago
Isn't that one of the measures of when it becomes an AGI? So that doesn't help you with however many nines we are away from getting an AGI.

Even if you don't like that definition, you still have the question of how many nines we are away from having an AI that can contribute to its own development.

I don't think you know the answer to that. And therefore I think your "fast acceleration within two years" is unsupported, just wishful thinking. If you've got actual evidence, I would like to hear it.

scragz•37m ago
AGI is when it is general. A narrow AI trained only on coding and training AIs would contribute to the acceleration without being AGI itself.
ben_w•30m ago
AI has been helping with the development of AI ever since at least the first optimising compiler or formal logic circuit verification program.

Machine learning has been helping with the development of machine learning ever since hyper-parameter optimisers became a thing.

Transformers have been helping with the development of transformer models… I don't know exactly, but it was before ChatGPT came out.

None of the initials in AGI are booleans.

But I do agree that:

> "fast acceleration within two years" is unsupported, just wishful thinking

Nobody has any strong evidence of how close "it" is, or even a really good shared model of what "it" even is.

Yoric•33m ago
It's a possibility, but far from certainty.

If you look at it differently, assembly language may have been one nine, compilers may have been the next nine, successive generations of language until ${your favorite language} one more nine, and yet, they didn't get us noticeably closer to AGI.

breuleux•26m ago
I don't think we can be confident that this is how it works. It may very well be that our level of intelligence has a hard limit to how many nines we can add, and AGI just pushes the limit further, but doesn't make it faster per se.

It may also be that we're looking at this the wrong way altogether. If you compare the natural world with what humans have achieved, for instance, both things are qualitatively different, they have basically nothing to do with each other. Humanity isn't "adding nines" to what Nature was doing, we're just doing our own thing. Likewise, whatever "nines" AGI may be singularly good at adding may be in directions that are orthogonal to everything we've been doing.

Progress doesn't really go forward. It goes sideways.

rpcope1•22m ago
> However, from a certain point onward, the AI itself will contribute to the development—adding nines—and that’s the key difference between this analogy of nines in other systems (including earlier domain‑specific ML ones) and the path to AGI.

There's a massive planet-sized CITATION NEEDED here, otherwise that's weapons grade copium.

breve•55m ago
> There are still more nines to go.

Great. So what's the plan for refunding with interest the customers who were defrauded by a full self-driving demo that was not and still isn't the product they were promised (much less delivered) by Tesla?

How much did Karpathy personally profit from the lie he participated in?

wcoenen•35m ago
This is not exactly new information[1]. You may have a point that it was not presented to customers this way though.

[1] https://x.com/elonmusk/status/1382458022367162370

jakeydus•35m ago
You know what they say, a Silicon Valley 9 is a 10 anywhere else. Or something like that.
Yoric•31m ago
I assume you're describing the fact that Silicon Valley culture keeps pushing out products before they're fully baked?
tekbruh9000•35m ago
Infinitely big little numbers

Academia has rediscovered itself

Signal attenuation, a byproduct of entropy, due to generational churn means there's little guarantee.

Occam's Razor: either Karpathy knows the future, or he is self-selecting biology trying to avoid manual labor?

His statements have more in common with Nostradamus. It's the toxic positivity form of "the end is nigh". It's "Heaven exists you just have to do this work to get there."

Physics always wins, and statistics is not physics. Gambler's fallacy: improving the statistical odds does not improve the probability. The probability remains the same; this is all promises from people who have no idea or interest in doing anything else with their lives, so stay the course.

godelski•32m ago
It's a good way to think about lots of things. It's the Pareto principle: the 80/20 rule.

20% of your effort gets you 80% of the way, but most of your time is spent getting that last 20%. People often don't realize that this is fractal-like in nature, since it follows a power-law distribution. Of the 20% you still have left, the same holds true: 20% of your time (20% * 80% = 16% -> 36% cumulative) gets you 80% of what remains (80% * 20% => 96% cumulative), again and again. The 80/20 numbers aren't actually realistic (or constant), but they're a decent guide.
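
A toy version of that compounding, as a sketch (the 80/20 split is illustrative, not a measured constant):

    # Each round: spend 20% of the remaining effort budget to capture
    # 80% of the remaining value. Matches the arithmetic above.
    effort = value = 0.0
    for round_no in range(1, 5):
        effort += (1 - effort) * 0.2
        value += (1 - value) * 0.8
        print(f"round {round_no}: effort {effort:.0%} -> value {value:.1%}")
    # round 1: effort 20% -> value 80.0%
    # round 2: effort 36% -> value 96.0%
    # round 3: effort 49% -> value 99.2%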

It's also something tech has been struggling with lately. Move fast and break things is a great way to get most of the way there, but you also leave a wake of destruction and a million little things tabled along the way. Someone needs to go back and clean things up; someone needs to revisit those tabled things. While each thing might be little, we solve big problems by breaking them down into little ones, so each big problem is the sum of many little ones, and they shouldn't be quickly dismissed. And as with the nines analogy, 99.9% uptime is still ~9 hours of downtime a year. It is still 1e6 cases out of 1e9, and a million cases is not a small problem. Scale is great and has made our field amazing, but it is a double-edged sword.

I think it's also something people struggle with. It's very easy to become above average, or even well above average, at something; just trying will often get you above average. It can make you feel like you know way more, but the trap is that while in some domains above average is not far from mastery, in other domains above average is closer to no skill than it is to mastery. Like how having $100m puts your wealth closer to a homeless person's than to a billionaire's. At $100m you feel way closer to the billionaire, because you're much further up than the person with nothing, but the curve is exponential.

010101010101•9m ago
https://youtu.be/bpiu8UtQ-6E?si=ogmfFPbmLICoMvr3

"I'm closer to LeBron than you are to me."

red75prime•21m ago
The question is how many nines are humans.
theusus•1h ago
Andrej is the most unreliable person to take as a source. Last year he claimed to be vibe coding, and now this.
jasonthorsness•1h ago
Andrej coined the term "vibe coding" in February on X, only 8 months ago.
guiomie•1h ago
He's also the guy behind FSD, which is kinda turning into a scam.
lazystar•22m ago
> FSD which is a scam.

fixed that for you.

johnhamlin•1h ago
Did anyone here actually watch the video before commenting? I’m seeing all the same old opinions and no specific criticisms of anything Karpathy said here.
awongh•1h ago
Now that Nvidia is the most valuable company, all this talk of actual AGI will be washed away by the huge amount of dollars driving the hype train.

Most of these companies' value is built on the idea of AGI being achievable in the near future.

AGI being too close or too far away affects the value of these companies- too close and it'll seem too likely that the current leaders will win. Too far away and the level of spending will seem unsustainable.

michaelt•29m ago
> Most of these companies value is built on the idea of AGI being achievable in the near future.

Is it? Or is it based on the idea a load of white collar workers will have their jobs automated, and companies will happily spend mid four figures for tech that replaces a worker earning mid five figures?

tootie•17m ago
Exactly. A 5-10 year timeline and you've got the formula for a new Space Race with China. Give us $7T or else China will control the world.

This 2024 story feels like ancient history that everyone has forgotten: https://www.cnbc.com/2024/02/09/openai-ceo-sam-altman-report...

zeroonetwothree•15m ago
It’s possible for AI to provide tremendous economic value without AGI
anon191928•10m ago
That is doubtful? Sure, it provides a lot of value, but current levels are dot-com-peak levels. Everyone knew the internet had value, but stocks pushed it too high.
sarchertech•4m ago
AGI in the not too distant future is always priced in. Just providing tremendous economic value won’t make the stock prices keep going up.
sosodev•1h ago
I think it's a shame that a 146 minute podcast released ~55 minutes ago has so much discussion. Everybody here is clearly just reacting to the title with their own biases.

I know it's against the guidelines to discuss the state of a thread, but I really wish we could have thoughtful conversations about the content of links instead of title reactions.

mpalmer•1h ago
Be fair; plenty of people transcribe and read podcasts, and/or summarize/excerpt them.
j45•1h ago
Summaries are great, but they can be surface-level.

The brain processes things and has insights differently when experiencing a conversation at conversation speed.

We might get what the conversation was that others had, but it can miss the mark for the listening and inner processing that leads to its own gifts.

It's not about one or the other for me, usually both.

tauchunfall•1h ago
There is a transcript; people can skim for interesting parts, read for 30 minutes, and then comment.

edit: typo fix.

markbao•1h ago
Just as the core idea of a book can be (lossily) summarized in a few sentences, the core crux of an argument can be quite simple and not require wading through the whole discussion (the AGI discussion is only 30 minutes anyhow).

Granted, a bunch of commenters are probably doing what you’re saying.

jasonthorsness•1h ago
This one does have the full transcript underneath (wonderful feature). But it's a long read too so I think your assumption is correct :P.
jlhawn•1h ago
gotta listen at 2x speed!
therealmarv•1h ago
a very human reaction ;)
Yossarrian22•1h ago
Maybe they used AI to transcribe and summarize the podcast
meowface•1h ago
Eh, Dwarkesh has to market the podcasts somehow. I think it's fine for him to use hooks like this and for HN threads to respond to the hooks. 99% of HN threads only ever reply to the headline and that's not changing anytime soon. This will likely cause many people (including myself) to watch the full podcast when we otherwise might not have.

The criticism that people are only replying to a tiny portion of the argument is still valid, but sometimes it's more fun to have an open-ended discussion rather than address what's in the actual article/video.

fragmede•1h ago
Who listens to podcasts at 1x speed? That's unbearably slow!
ghaff•53m ago
I do. I'm really not a fan of sped-up audio in general. If I'm focused on speed I'd rather read/skim a transcript.
tootie•15m ago
Idk how this Dwarkesh Patel got so popular so fast. I'd never heard of him and he keeps popping up in my feeds.
goalieca•1h ago
I remember attending a lecture by a famous quantum computing researcher in 2003. He said that quantum computing was 15-20 years away, and then he followed up by saying that if he told anyone it was further away than that, he wouldn't get funding!
Yoric•30m ago
And now (useful) quantum computing is 5 years away! Has been for a few years, too.
EA-3167•9m ago
It's an excellent time-frame that sounds imminent enough to draw interest (and funding), but is distant enough that you can delay the promised arrival a few times in the span of a career before retiring.

Fusion research lives and dies on this premise, ignoring the hard problems that require fundamental breakthroughs in areas such as materials science, in favor of touting arbitrary benchmarks that don't indicate real progress towards fusion as a source of power on the grid.

"Full self driving" is another example; your car won't be doing this, but companies will brag about limited roll-outs of niche cases in dry, flat, places that are easy to navigate.

netrap•1h ago
Wonder if it will end up like nuclear fusion.. just another decade away! :)
reenorap•1h ago
Are "agents" just programs that call into an LLM and, based on the response, do something?
cootsnuck•1h ago
Kinda. It's just an LLM that performs function calling (i.e., the LLM "decides" when a function needs to be called for a task and, based on its context, passes the appropriate function name and arguments). So yeah, an "agent" is the LLM doing all of that, plus your program actually executing the function accordingly.

That's an "agent" at its simplest: an LLM able to derive from natural language when it is contextually appropriate to call out to external "tools" (i.e., functions). A minimal sketch of that loop follows below.
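Here's a minimal sketch in Python. The message shapes and the canned model are made up for illustration; real tool-calling APIs (OpenAI's, Anthropic's, etc.) differ in the details but follow this shape:

    import json

    def get_weather(city: str) -> str:
        """Stand-in for the external work the LLM can't do itself."""
        return f"Sunny in {city}"

    TOOLS = {"get_weather": get_weather}

    def fake_llm(messages):
        """Canned stand-in for a model: first request a tool, then answer."""
        if not any(m["role"] == "tool" for m in messages):
            return {"tool_call": {"name": "get_weather",
                                  "arguments": json.dumps({"city": "Tokyo"})}}
        return {"tool_call": None,
                "content": f"Looks like: {messages[-1]['content']}"}

    def agent(user_message: str) -> str:
        messages = [{"role": "user", "content": user_message}]
        while True:
            reply = fake_llm(messages)       # ask the model what to do next
            if reply["tool_call"] is None:
                return reply["content"]      # plain answer: we're done
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "content": result})  # feed result back

    print(agent("What's the weather in Tokyo?"))  # -> Looks like: Sunny in Tokyo

The loop is the whole trick: the model proposes, your code disposes, and the result goes back into the context for the next step.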

rwaksmunski•1h ago
AGI is still a decade away, and always will be.
mkbelieve•1h ago
I don't understand how anyone can believe that we're near even a whiff of AGI when we barely understand what dreaming is, or how the human brain interacts with the quantum world. There are so many elements of human creativity that are still utterly hidden behind a wall that it makes me feel insane when an entire industry is convinced we're just magically going to have the answer soon.

The people heralding the emergence of AGI are doing little more than pushing Ponzi schemes along while simultaneously fueling vitriolic waves of hate and neo-luddism for a ground-breaking technology boom that could enhance everything about how we live our lives... if it doesn't get regulated into the ground due to the fear they're recklessly cooking up.

kovek•1h ago
There are many different definitions of "AGI" that people come up with; some include dreaming, the quantum world, and creativity, and some do not.
throwaway-0001•26m ago
We don't know how a horse works, but we got cars. The analogy doesn't work.
observationist•1h ago
Kurzweil has been eerily right so far, and his timeline has AGI at 2029. When software can perform any unattended, self directed task (in principle) at least as well as any human over the sum total of all tasks that humans are capable of doing, we will have reached AGI.

Software can already write more text on any given subject better than a majority of humanity. It can arguably drive better across more contexts than all of humanity - any human driver over a billion miles of normal traffic will have more accidents than self driving AI over the same distance. Short stories, haikus, simple images, utility scripts, simple software, web design, music generation - all of these tasks are already superhuman.

Longer time horizons, realtime and continuous memory, a suite of metacognitive tasks, planning, synthesis of large bodies of disparate facts into novel theory, and a few other categories of tasks are currently out of reach, but some are nearly solved, and the list of things that humans can do better than AI gets shorter by the day. We're a few breakthroughs away, maybe even one big architectural leap, from having software that is capable (in principle) of doing anything humans can do.

I think AGI is going to be here faster than Kurzweil predicted, because he probably didn't take into consideration the enormous amount of money being spent on these efforts.

There has never been anything like this in history: in the last decade, over 5 trillion dollars has been spent on AI research and on technologies that support AI, like crypto mining datacenters that pivoted to AI, plus new power, water, and data infrastructure, providing the foundation for the concerted efforts in research and development. There are tens of thousands of AI researchers, some working in private finance, some in academia, some doing military research, some doing open source, and a ton doing private sector research, of which an astonishing amount is getting published and shared.

In contrast, the entire world spent around 16 trillion dollars on World War II: all of the R&D, emergency projects, military logistics, humanitarian aid, and so on.

We have AI getting more resources and attention and humans involved in a singular development effort, pushing toward a radical transformation of the very concept of "labor" - while I think it might be a good thing if it is a decade away, even perpetually so until we have some reasonable plan for coping with it, I very much think we're going to see AGI within the very near future.

*When I say "in principle" I mean that given the appropriate form factor, access, or controls, the AI can do all the thinking, planning, and execution that a human could do, at least as well as any human. We will have places that we don't want robots or AI going, tasks reserved for humans, traditions, taboos, economics, and norms that dictate AI capabilities in practice, but there will be no legitimacy to the idea that an AI couldn't do a thing.

overgard•1h ago
Right in time for the year of the linux desktop.
nopinsight•1h ago
A definition of AGI: https://www.agidefinition.ai/

A new contribution by quite a few prominent authors. One of the better efforts at defining AGI *objectively*, rather than through indirect measures like economic impact.

I believe it is incomplete because the psychological theory it is based on is incomplete. It is definitely worth discussing though.

—-

In particular, creative problem solving in the strong sense, ie the ability to make cognitive leaps, and deep understanding of complex real-world physics such as the interactions between animate and inanimate entities are missing from this definition, among others.

kart23•48m ago
I don't know a single one of the "Social Science" items, and I'm pretty sure 90% of college educated people wouldn't know a single one either.
chrisweekly•21m ago
I agree it seems like a better-structured effort than many others. But its shortcomings go beyond a shallow and incomplete foundation in psychology. It also has basic errors in its execution, eg a "Geography" question about centripetal and centrifugal forces. Color me extremely skeptical.
cayleyh•1h ago
"decade" being the universal time frame for "I don't know" :D
ActorNightly•1h ago
Not a decade. More like a century, and that is if society figures itself out enough to do some engineering on a planetary scale, and quantum computing is viable.

Fundamentally, AGI requires 2 things.

First, it needs to be able to operate without information, learning as it goes. The core kernel should have no training on real-world concepts, only general language parsing it can use to map input to some logic structure and determine a plan of action. So, for example, if you give the kernel the ability to send ethernet packets, it should eventually figure out how to talk TLS to communicate with the modern web, even if that takes an insane amount of repetition.

The reason for this is that you want the kernel to be able to find its way through any arbitrarily complex problem space. Then, as it gains access to more data, whether real-time or in memory, it can become more and more efficient.

This part is solvable. After all, human brains do this. A single rack of Google TPUs is roughly the same petaflops as a human brain operating at max capacity, if you assume each neuron activation is an add-multiply at a firing rate of 200 times per second, and humans don't use all of their brain all the time.
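A back-of-envelope check under those assumptions (the synapse count below is a rough order-of-magnitude guess, not a measurement; every number here is illustrative):

    # Rough brain "compute" under the comment's assumptions -- all estimates.
    SYNAPSES = 1e14        # often-cited order-of-magnitude figure
    FIRE_RATE_HZ = 200     # max firing rate assumed above
    OPS_PER_EVENT = 2      # one multiply + one add per synaptic event

    ops_per_sec = SYNAPSES * FIRE_RATE_HZ * OPS_PER_EVENT
    print(f"~{ops_per_sec / 1e15:.0f} petaFLOP/s")  # ~40 petaFLOP/s

Tens of petaFLOP/s is indeed the ballpark of a modern accelerator rack, though the figure swings by orders of magnitude depending on the assumptions.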

The second part, which makes the intelligence general, is the ability to simulate reality faster than reality. Life is imperative by nature, and there are processes with chaotic effects (human brains being one of them) that have no good mathematical approximations. As such, if an AGI is to truly simulate a human brain in order to predict behavior, it needs to do this at an approximation level that is good enough, but also fast enough that it can predict your behavior before you exhibit it, with overhead for running simulations in parallel and figuring out the best course of action. So for a single brain, you are probably looking at a full six warehouses of TPUs.

ctoth•25m ago
You want a "core kernel" with "general language parsing" but no training on real-world concepts.

Read that sentence again. Slowly.

What do you think "general language parsing" IS if not learned patterns from real-world data? You're literally describing a transformer and then saying we need to invent it.

And your TLS example is deranged. You want an agent to discover the TLS protocol by randomly sending ethernet packets? The combinatorial search space is so large this wouldn't happen before the sun explodes. This isn't intelligence! This is brute force with extra steps!

Transformers already ARE general algorithms with zero hardcoded linguistic knowledge. The architecture doesn't know what a noun is. It doesn't know what English is. It learns everything from data through gradient descent. That's the entire damn point.

You're saying we need to solve a problem that was already solved in 2017 while claiming it needs a century of quantum computing.

TheBlight•58m ago
I knew this once I heard OpenAI was going to get into the porn bot business. If you have AGI you don't need porn bots.
tasuki•44m ago
Why not?
gtirloni•40m ago
Most likely because you'll be filthy rich from selling AGI and won't need to go after secondary revenue sources.
jeffreygoesto•56m ago
Ah. The old "still close enough, so we can pretend you should pour your money over us, as we haven't identified the next hype yet" razzle dazzle...
arthurofbabylon•50m ago
Agency. If one studied the humanities they’d know how incredible a proposal “agentic” AI is. In the natural world, agency is a consequence of death: by dying, the feedback loop closes in a powerful way. The notion of casual agency (I’m thinking of Jensen Huang’s generative > agentic > robotic insistence) is bonkers. Some things are not easily speedrunned.

(I did listen to a sizable portion of this podcast while making risotto (stir stir stir), and the thought occurred to me: “am I becoming more stupid by listening to these pundits?” More generally, I feel like our internet content (and meta content (and meta meta content)) is getting absolutely too voluminous without the appropriate quality controls. Maybe we need more internet death.)

m3kw9•48m ago
Another AGI discussion without first defining what AGI is in their minds.
mwkaufma•45m ago
Ten years away, just like it was ten years ago and will be ten years from now.
cboyardee•36m ago
AGI ---> A Great Illusion!
rcarmo•32m ago
Hmm. Zeno's Paradox.

(I was in college during the first AI Winter, so... I can't help but think that the cycles are tighter but convergence isn't guaranteed.)

nadermx•30m ago
I bet you we are all wrong and some random person is going to vibe code himself into something none of us expected. I half kid. If you haven't seen it, I highly suggest https://karpathy.ai/zero-to-hero.html
dingnuts•21m ago
Then why didn't Karpathy vibe code this?

https://x.com/GaryMarcus/status/1978500888521068818

hackitup7•29m ago
All right everybody, back to the mines.
benzible•18m ago
What's his estimate of how far we are from a definition of AGI?
password54321•16m ago
Can perform out-of-distribution tasks at roughly average human-level performance or better.
moomoo11•16m ago
What about Super AGI?
mediumsmart•15m ago
Getting to AGI is not the problem. Finding the planet that it is going to run on will be.
anon191928•14m ago
Amazing that he speaks the truth even though trillions of dollars (and his stock options?) depend on it. He and Dennis H. deserve all the respect.
Handy-Man•9m ago
I mean, Dennis is just another hype man now, irrespective of whatever important research they may be doing in the background.
superconduct123•9m ago
I always get a weird feeling when AI researchers and CS people start talking about comparisons between human brains and AI/computers.

Why is there a presumption that we (as people who have only studied CS) know enough about biology/neuroscience/evolution to make these comparisons/parallels?

I don't mean this in a mean way but in the back of my head I'm thinking "...you realize you're listening to 2 CS majors talk about neuroscience"

jjulius•3m ago
>Why is there a presumption that we (as people who have only studied CS) know enough about biology/neuroscience/evolution to make these comparisons?

Hubris.

maqnius•9m ago
Well, no one really knows — maybe we're just putting a lot of effort into turning a lump of clay into pizza. It already looks confusingly similar; now it just needs to smell and taste like it.
aaroninsf•5m ago
I'm pretty content to say this may be true, but may well prove quite wrong.

Why? Because humans—including the smartest of us—are continuously prone to cognitive errors, and reasoning about the non-linear behavior of complex systems is a domain we are predictably and durably terrible at, even when we try to compensate.

Personally I consider the case of self-driving cars illustrative, and a go-to reminder of my own very human failure here. I was quite sure that we could not have autonomous vehicles in dynamic, messy urban areas without true AGI, and that FSD would, in the fashion of the failed Tesla offering, emerge first in the much more constrained space of the highway system, which would also benefit from federal regulation and coordination.

Now Waymos have eaten SF, and their driving is increasingly nuanced; last night a friend and very early adopter relayed a series of anecdotes about strikingly nuanced interactions he'd been party to recently, including being in a car that was attacked late at night, and how one did exactly the right thing when approached head-on on a narrow neighborhood street that required backing out. Etc.

That's just one example, and IMO we are only beginning to experience the benefits of the "network effects" so popular in tales of singularity take-off.

Ten years is a very, very, very long time under current conditions. I have done neural networks since the mid-90s (academically: published, presented, etc.) and I have proven terrible at anticipating how quickly "things" will improve. I have now multiple times watched my predictions that X or Y would take "5-8" or "8-10" years, or was "too far out to tell," instead arrive within 3 years.

Karpathy is smart of course but he's no smarter in this domain than any of the rest of us.

Are scaled tuned transformers with tack-ons going to give us AGI in 18 months? "No" is a safe bet. Is no approach going to give us AGI inside of 5 years? That is absolutely a bet I would never make. Not even close.

dlcarrier•5m ago
Redefinitions aside, fully capable AI is right up there with commercially viable fusion power, cost-effective quantum computing, and fully capable self-driving cars, as a technology that is quickly advancing yet always a decade or two away.
