frontpage.

Mercury: Ultra-Fast Language Models Based on Diffusion

https://arxiv.org/abs/2506.17298
224•PaulHoule•4h ago•74 comments

Launch HN: Morph (YC S23) – Apply AI code edits at 4,500 tokens/sec

63•bhaktatejas922•2h ago•32 comments

I used o3 to profile myself from my saved Pocket links

https://noperator.dev/posts/o3-pocket-profile/
150•noperator•4h ago•68 comments

Adding a feature because ChatGPT incorrectly thinks it exists

https://www.holovaty.com/writing/chatgpt-fake-feature/
221•adrianh•2h ago•58 comments

Show HN: Unlearning Comparator, a visual tool to compare machine unlearning

https://gnueaj.github.io/Machine-Unlearning-Comparator/
10•jaeunglee•58m ago•0 comments

Dyson, techno-centric design and social consumption

https://2earth.github.io/website/20250707.html
33•2earth•2h ago•24 comments

When Figma starts designing us

https://designsystems.international/ideas/when-figma-starts-designing-us/
128•bravomartin•1d ago•48 comments

François Chollet: The Arc Prize and How We Get to AGI [video]

https://www.youtube.com/watch?v=5QcCeSsNRks
112•sandslash•4d ago•88 comments

Show HN: Ossia score – a sequencer for audio-visual artists

https://github.com/ossia/score
3•jcelerier•7m ago•0 comments

tinymcp: Let LLMs control embedded devices via the Model Context Protocol

https://github.com/golioth/tinymcp
14•hasheddan•1h ago•1 comment

CPU-X: CPU-Z for Linux

https://thetumultuousunicornofdarkness.github.io/CPU-X/
28•nateb2022•3h ago•6 comments

The Era of Exploration

https://yidingjiang.github.io/blog/post/exploration/
18•jxmorris12•1h ago•2 comments

Solving Wordle with uv's dependency resolver

https://mildbyte.xyz/blog/solving-wordle-with-uv-dependency-resolver/
65•mildbyte•1d ago•4 comments

Bitchat – A decentralized messaging app that works over Bluetooth mesh networks

https://github.com/jackjackbits/bitchat
559•ananddtyagi•17h ago•250 comments

So you wanna build an aging company

https://www.librariesforthefuture.bio/p/is-this-aging
14•apsec112•2d ago•2 comments

Lightfastness Testing of Colored Pencils

https://sarahrenaeclark.com/lightfast-testing-pencils/
42•picture•2d ago•6 comments

Hymn to Babylon, missing for a millennium, has been discovered

https://phys.org/news/2025-07-hymn-babylon-millennium.html
106•wglb•3d ago•24 comments

Tuning the Prusa Core One

https://arachnoid.com/3D_Printing_Prusa_Core_One/
29•lutusp•2h ago•15 comments

SUS Lang: The SUS Hardware Description Language

https://sus-lang.org/
13•nateb2022•57m ago•0 comments

AI Cameras Change Driver Behavior at Intersections

https://spectrum.ieee.org/ai-intersection-monitoring
18•sohkamyung•4h ago•26 comments

Show HN: NYC Subway Simulator and Route Designer

https://buildmytransit.nyc
58•HeavenFox•3h ago•3 comments

Cpparinfer: A C++23 implementation of the parinfer algorithm

https://gitlab.com/w0utert/cpparinfer
35•tosh•4d ago•3 comments

Neanderthals operated prehistoric “fat factory” on German lakeshore

https://archaeologymag.com/2025/07/neanderthals-operated-fat-factory-125000-years-ago/
185•hilux•3d ago•126 comments

A non-anthropomorphized view of LLMs

http://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html
341•zdw•18h ago•307 comments

Show HN: I wrote a "web OS" based on the Apple Lisa's UI, with 1-bit graphics

https://alpha.lisagui.com/
437•ayaros•22h ago•125 comments

Show HN: Piano Trainer – Learn piano scales, chords and more using MIDI

https://github.com/ZaneH/piano-trainer
149•FinalDestiny•2d ago•45 comments

Show HN: Integrated System for Enhancing VIC Output

https://github.com/Bloodmosher/ISEVIC
4•bloodmosher•2h ago•0 comments

Anthropic cut up millions of used books, and downloaded 7M pirated ones – judge

https://www.businessinsider.com/anthropic-cut-pirated-millions-used-books-train-claude-copyright-2025-6
204•pyman•7h ago•253 comments

Why English doesn't use accents

https://www.deadlanguagesociety.com/p/why-english-doesnt-use-accents
252•sandbach•19h ago•389 comments

Show HN: Microjax – JAX in two classes and six functions

https://github.com/joelburget/microjax
31•joelburget•3h ago•1 comment

François Chollet: The Arc Prize and How We Get to AGI [video]

https://www.youtube.com/watch?v=5QcCeSsNRks
112•sandslash•4d ago

Comments

qoez•4h ago
I feel like I'm the only one who isn't convinced that getting a high score on the ARC eval means we have AGI. It's mostly about pattern matching (and some of it is ambiguous even for humans as to what the true response ought to be). It's like how humans have lots of different 'types' of intelligence: overfitting on IQ tests doesn't convince me a person is actually that smart.
avmich•4h ago
Roughly speaking, the job of a medical doctor is to diagnose the patient, and then, once the diagnosis is made, to apply the treatment from the book that corresponds to the diagnosis.

The diagnosis is pattern matching (again, roughly). It kinda suggests that a lot of "intelligent" problems are focused on pattern matching, plus the (relatively straightforward) application of "previous experience". So pattern matching can bring us a great deal of the way towards AGI.

AnimalMuppet•4h ago
Pattern matching is instinct. (Or at least, instinct is a kind of pattern matching. And once you learn the patterns, pattern matching can become almost instinctual). And that's fine, for things that fit the pattern. But a human-level intelligence can also deal with problems for which there is no pattern. (I mean, not always successfully - finding a correct solution to a novel problem is difficult. But it is within the capability of at least some humans.)
yorwba•4h ago
I think the people behind the ARC Prize agree that getting a high score doesn't mean we have AGI. (They already updated the benchmark once to make it harder.) But an AGI should get a similarly high score as humans do. So current models that get very low scores are definitely not AGI, and likely quite far away from it.
whiplash451•4h ago
You're not the only one. ARC-AGI is a laudable effort, but its fundamental premise is indeed debatable:

"We argue that human cognition follows strictly the same pattern as human physical capabilities: both emerged as evolutionary solutions to specific problems in specific evironments" (from page 22 of On the Measure of Intelligence)

https://arxiv.org/pdf/1911.01547

Davidzheng•4h ago
But because of this "uneven edge" (AI weaknesses not necessarily being the same as human weaknesses), I believe that once we run out of tests on which AI is worse than humans, it will in effect already be very much superhuman. My main evidence for this is Leela Zero, the Go AI, which struggled with ladders and some other aspects of Go play well into the superhuman regime (in Go it's easier to see when a player is superhuman because you have Elo ratings, win rates, etc., and there's less room for debate).
energy123•4h ago
https://en.m.wikipedia.org/wiki/AI_effect

But on a serious note, I don't think Chollet would disagree. ARC is a necessary but not sufficient condition, and he says that, despite the unfortunate attention-grabbing name choice of the benchmark. I like Chollet's view that we will know that AGI is here when we can't come up with new benchmarks that separate humans from AI.

loki_ikol•4h ago
Well, for most, the next steps are probably towards removing the highly deterministic and discrete characteristics of current approaches (we certainly don't think in lockstep). There are no measures for those. Even the creative aspect is undermined by those characteristics.
kubb•4h ago
AGI isn't defined anywhere, so it can be anything you want.
FrustratedMonky•3h ago
Yes. And a lot of humans also don't pass for having AGI.
mindcrime•10m ago
Oh, it's defined in lots of places. The problem is... it's defined in lots of places!
oldge•4h ago
Today's LLMs are fancy autocomplete, but they lack test-time self-learning and persistent drive. By contrast, an AGI would require:

– A goal-generation mechanism (G) that can propose objectives without external prompts
– A utility function (U) and policy π(a|s) enabling action selection and hierarchy formation over extended horizons
– Stateful memory (M) plus feedback integration, to evaluate outcomes, revise plans, and execute real-world interventions autonomously

Without G, U, π, and M operating, LLMs remain reactive statistical predictors, not human-level intelligence.
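
For concreteness, a toy sketch of the loop these components imply, in Python (every name here is illustrative; this is not an existing framework or anything from the talk):

    import random

    class Memory(list):                          # M: stateful memory of past feedback
        pass

    def G(memory):                               # goal generation, no external prompt
        return random.choice(["explore", "exploit"])

    def U(outcome, goal):                        # utility of an outcome under a goal
        return outcome if goal == "exploit" else 1.0

    def policy(state, goal):                     # pi(a|s), trivially goal-conditioned
        return "try_new" if goal == "explore" else "repeat_best"

    memory = Memory()
    goal = G(memory)
    for step in range(10):
        state = len(memory)                      # stand-in for an observation
        action = policy(state, goal)
        outcome = random.random()                # stand-in for world feedback
        memory.append((state, action, outcome))
        if U(outcome, goal) > 0.8:               # outcome good enough: revise the goal
            goal = G(memory)
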
KoolKat23•4h ago
I'd say we're not far off.

Looking at the human side, it takes a while to actually learn something. If you've recently read something, it remains in your "context window". You need to dream about it, to think about it, to revisit and repeat it until you actually learn it and "update your internal model". We need a mechanism for continuous weight updating.

Goal generation is pretty much covered by your body constantly drip-feeding your brain various hormones, i.e. "ongoing input prompts".

onemoresoop•3h ago
> I'd say we're not far off.

How are we not far off? How can LLMs generate goals, and based on what?

NetRunnerSu•3h ago
Minimize prediction errors.
tsurba•2h ago
But are we close to doing that in real-time on any reasonably large model? I don’t think so.
FeepingCreature•2h ago
You just train it on the goal. Then it has that goal.

Alternately, you can train it on following a goal and then you have a system where you can specify a goal.

At sufficient scale, a model will already contain goal-following algorithms, because those help predict the next token when the model is base-trained on goal-following entities, i.e. humans. Goal-driven RL then brings those algorithms to prominence.
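
A toy illustration of that last claim (entirely my construction, nothing from the thread): a crude policy-gradient loop that only rewards one of two latent behaviors makes the goal-following one dominate.

    import random

    weights = {"follow_goal": 1.0, "ramble": 1.0}    # two latent behaviors in the base model

    def act():
        total = sum(weights.values())
        return random.choices(list(weights), [w / total for w in weights.values()])[0]

    for _ in range(500):
        a = act()
        reward = 1.0 if a == "follow_goal" else 0.0  # reward only goal-following
        weights[a] += 0.1 * reward                   # crude policy-gradient step

    print(weights)  # "follow_goal" now dominates the sampling distribution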

kelseyfrog•37m ago
How do you figure goal generation and supervised goal training are interchangeable?
NetRunnerSu•3h ago
Yes, you're right, that's what we're doing.

https://github.com/dmf-archive/PILF

NetRunnerSu•3h ago
In fact, there is no technical threshold anymore. As long as the theory is in place, you could see such AGI within half a year at most. It will even be more energy efficient than current dense models.

https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...

TheAceOfHearts•4h ago
Getting a high score on ARC doesn't mean we have AGI, and Chollet has always said as much, AFAIK; it's meant to push the AI research space in a positive direction. Being able to solve ARC problems is probably a prerequisite for AGI. It's a directional push into the fog of war, with the claim being that we should explore that area because we expect it's relevant to building AGI.
lostphilosopher•1h ago
We don't really have a true test that means "if we pass this test we have AGI" but we have a variety of tests (like ARC) that we believe any true AGI would be able to pass. It's a "necessary but not sufficient" situation. Also ties directly to the challenge in defining what AGI really means. You see a lot of discussions of "moving the goal posts" around AGI, but as I see it we've never had goal posts, we've just got a bunch of lines we'd expect to cross before reaching them.
ummonk•13m ago
"Being able to solve ARC problems is probably a pre-requisite to AGI." - is it? Humans have general intelligence and most can't solve the harder ARC problems.
ben_w•4h ago
You're not alone in this; I expect we have not yet enumerated all the things that we ourselves mean by "intelligence".

But conversely, not passing this test is proof of not being as general as human intelligence.

NetRunnerSu•4h ago
Unfortunately, we did it. All that is left is to assemble the parts.

https://news.ycombinator.com/item?id=44488126

kypro•3h ago
I find the "what is intelligence?" discussion a little pointless, if I'm honest. It's similar to asking what it means to be a "good person", and whether we would know if an AI or person is really "good".

While understanding why a person or AI is doing what it's doing can be important (particularly in safety contexts), at the end of the day all that's really going to matter to most people is the outcomes.

So if an AI can use what appears to be intelligence to solve general problems and can act in ways that are broadly good for society, whether or not it meets some philosophical definition of "intelligent" or "good" doesn't matter much – at least in most contexts.

That said, my own opinion on this is that the truth is likely in between. LLMs today seem extremely good at being glorified auto-completes, and I suspect most (95%+) of what they do is just recalling patterns in their weights. But unlike traditional auto-completes they do seem to have some ability to reason and solve truly novel problems. As it stands I'd argue that ability is fairly poor, but this might only represent 1-2% of what we use intelligence for.

If I were to guess why this is, I suspect it's not that today's LLM architecture is completely wrong, but that the way LLMs are trained means that knowledge recall is in general rewarded more than reasoning. This is similar to the trade-off we humans face in education: do you prioritise the acquisition of knowledge or critical thinking? Maybe you believe critical thinking is more important and should be prioritised more, but I suspect that for the vast majority of tasks we're interested in solving, knowledge storage and recall is actually more important.

ben_w•19m ago
That's certainly a valid way of looking at their abilities at any given task — "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim".

But when the question is "are they going to be more important to the economy than humans?", then they have to be good at basically everything a human can do; otherwise we just see a variant of Amdahl's law in action, with the AI performing an arbitrarily large speed-up of n% of the economy while humans are needed for the remaining (100 - n)%.

I may be wrong, but it seems to me that the ARC prize is more about the latter.
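
A worked version of that Amdahl's-law framing, with illustrative numbers of my own choosing: even an effectively infinite AI speed-up on a fraction p of the economy is capped at 1/(1 - p) overall.

    def overall_speedup(p, ai_speedup):
        # Amdahl's law: the human-only remainder (1 - p) bounds the total gain.
        return 1 / ((1 - p) + p / ai_speedup)

    for p in (0.5, 0.9, 0.99):
        print(p, round(overall_speedup(p, ai_speedup=1e9), 1))  # ~2.0, ~10.0, ~100.0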

OtomotO•4h ago
You're not alone in this, no.

My definition of AGI is the one I was brought up with, not an ever-moving goalpost (moving to the "easier" side).

And no, I also don't buy that we are just stochastic parrots.

But whatever. I've seen many hypes, and if I don't die and the world doesn't go to shit, I'll see a few more in the next couple of decades.

NetRunnerSu•4h ago
To pass ARC, you need a living model with sentient abilities, not the dead frog we have now.

https://news.ycombinator.com/item?id=44488126

nxobject•4h ago
I understand Chollet is transparent that the "branding" of the ARC-AGI-n suites is meant to be suggestive of their purpose rather than substantive.

However, it does rub me the wrong way, as someone who's cynical about how branding can enable breathless AI hype via bad journalism. A hypothetical comparison would be labelling SHRDLU's (1968) performance on Blocks World planning tasks as "ARC-AGI-(-1)".[0]

A less loaded name, like (bad strawman option) "ARC-VeryToughSymbolicReasoning", would capture how the ARC-AGI-n suite is genuinely and intrinsically very hard for current AIs, and what satisfactory performance on the benchmark suite would represent. Which Chollet has done, and it has grounded him throughout! [1]

[0] https://en.wikipedia.org/wiki/SHRDLU
[1] https://arxiv.org/abs/1911.01547

heymijo•3h ago
I get what you're saying about perception being reality and that ARC-AGI suggests beating it means AGI has been achieved.

In practice, when I have seen ARC brought up, it has been discussed with more nuance than any of the other benchmarks.

Unlike Humanity's Last Exam, which is the most egregious example I have seen, both in its naming and in how it is referenced in terms of an LLM's capability.

maaaaattttt•4h ago
I've said this somewhere else, but we have the perfect test for AGI in the form of any open-world game. Give the AGI the instruction that it should finish the game, and tell it how to control the game. Give it the frames as input and wait. When I think of the latest Zelda games, and especially how the Shrine challenges are designed, they feel like the perfect environment for an AGI test.
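
A minimal sketch of that test harness, assuming a hypothetical agent/env API (none of these names come from the talk or the comment):

    def run_open_world_eval(agent, env, instructions, max_steps=1_000_000):
        agent.receive(instructions)        # "finish the game; here is how to control it"
        frame = env.reset()                # first rendered frame
        for _ in range(max_steps):
            action = agent.act(frame)      # frames in, controller inputs out
            frame, finished = env.step(action)
            if finished:                   # credits rolled: the game was beaten
                return True
        return False
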
Lerc•3h ago
And if someone makes a machine that does all that and another person says

"That's not really AGI because xyz"

What then? The difficulty in coming up with a test for AGI is coming up with something that people will accept a passing grade on as proof of AGI.

In many respects, I feel like all the claims that models don't really understand, or don't have internal representations, or whatever, tend to lean on nebulous or circular definitions of the properties in question. Trying to pin the arguments down usually ends up in dualism and/or religion.

Doing what Chollet has done is infinitely better: if a person can easily do something and a model cannot, then there is clearly something significant missing.

It doesn't matter what the property is or what it is called. Such tests might even help us see what those properties are.

Anyone who wants to claim a fundamental inability of these models should be able to provide a task where it is clearly possible to tell when it has been solved, and to show that humans can do it (if that's the bar we are claiming can't be met). If they are right, then no future model should be able to solve that class of problems.

maaaaattttt•3h ago
Given your premise (which I agree with), I think the issue in general comes from the lack of a good, broadly accepted definition of what AGI is. My initial comment originates from the fact that, in my internal definition, an AGI would have a de facto understanding of the physics of "our world". Or better, could infer it by trial and error. But, indeed, it doesn't have to be the case. (The other advantage of the Zelda games is that they introduce new abilities that don't exist in our world, and most children I've seen understand their mechanisms, and how they could be applied to solve a problem, quite naturally, even though they've never had that ability before.)
wat10000•1h ago
I'd say the issue is the lack of a good, broadly accepted definition of what the "I" is. We all know "smart" when we see it, but actually defining it in a rigorous way is tough.
jcranmer•2h ago
> The difficulty in coming up with a test for AGI is coming up with something that people will accept a passing grade as AGI.

The difficulty with intelligence is we don't even know what it is in the first place (in a psychology sense, we don't even have a reliable model of anything that corresponds to what humans point at and call intelligence; IQ and g are really poor substitutes).

Add into that Goodhart's Law (essentially, propose a test as a metric for something, and people will optimize for the test rather than what the test is trying to measure), and it's really no surprise that there's no test for AGI.

bonoboTP•1h ago
> It doesn't matter what the property is or what it is called. Such tests might even help us see what those properties are.

This is a very good point and somewhat novel to me in its explicitness.

There's no reason to think that we already have the concepts and terminology to point out the gaps between the current state and human-level intelligence and beyond. It's incredibly naive to think we have already armchair-generated those concepts by pure self-reflection and philosophizing. This is obvious in fields like physics: experiments were necessary to even come up with the basic concepts of electromagnetism, relativity, or quantum mechanics.

I think the reason is that pure philosophizing is still more prestigious than getting down in the weeds and doing limited-scope, well-defined experiments on concrete things. So people feel smart wielding poorly defined concepts like "understanding", "reasoning", or "thinking", and contrasting them with "mere pattern matching". It's a bit like the stalemate that philosophy as a field often hits, as opposed to the more pragmatic approach of the sciences, where empirical contact with reality allows more consensus and clarity without getting caught up in mere semantics.

davidclark•3h ago
In the video, François Chollet, creator of the ARC benchmarks, says that beating ARC does not equate to AGI. He specifically says they will be able to be beaten without AGI.
cainxinth•3h ago
> It's mostly about pattern matching...

For all we know, human intelligence is just an emergent property of really good pattern matching.

cttet•3h ago
The point is not that a high score implies AGI; the idea is more that a low score shows we don't have AGI yet.
CamperBob2•2h ago
If you can write code to solve ARC by "overfitting," then give it a shot! There's prize money to be won, as long as your model does a good job on the hidden test set. Zuckerberg is said to be throwing around 8-figure signing bonuses for talent like that.

But then, I guess it wouldn't be "overfitting" after all, would it?

gonzobonzo•2h ago
I agree with you but I'll go a step further - these benchmarks are a good example of how far we are from AGI.

A good base test would be to give a manager a mixed team of remote workers, half being human and half being AI, and seeing if the manager or any of the coworkers would be able to tell the difference. We wouldn't be able to say that AI that passed that test would necessarily be AGI, since we would have to test it in other situations. But we could say that AI that couldn't pass that test wouldn't qualify, since it wouldn't be able to successfully accomplish some tasks that humans are able to.

But of course, current AI is nowhere near that level yet. We're left with benchmarks, because we all know how far away we are from actual AGI.

criddell•2h ago
The AGI test I think makes sense is to put it in a robot body and let it navigate the world. Can I take the robot to my back yard and have it weed my vegetable garden? Can I show it how to fold my laundry? Can I take it to the grocery store and tell it "go pick up 4 yellow bananas and two avocados that will be ready to eat in the next day or two, and then meet me in dairy"? Can I ask it to dice an onion for me during meal prep?

These are all things my kids would do when they were pretty young.

gonzobonzo•1h ago
I agree. I think of that as the next level beyond the digital assistant test: a physical assistant test. Once there are sufficiently capable robots, hook one up to the AI. Tell it to mow your lawn, drive your car to the mechanic to get it checked, box up an item, take it to the post office, and have it shipped, pick up your dry cleaning, buy ingredients from a grocery store, cook dinner, etc. Basic tasks a low-skilled worker would do as someone's assistant.
godshatter•2h ago
The problem with "spot the difference" tests, IMHO, is that I would expect an AGI to be easily spotted. There's going to be a speed-of-calculation difference, at the very least. If nothing else, typing speed would be completely different, unless the AGI is supposed to be deceptive. Who knows what its personality would be like. I'd say a simple enough test is just to see whether an AGI could be hired as, for example, an entry-level software developer and keep its job based on the same criteria base-level humans have to meet.

I agree that current AI is nowhere near that level yet. If AI isn't even trying to extract meaning from the words it smiths or the pictures it diffuses then it's nothing more than a cute (albeit useful) parlor trick.

SubiculumCode•2h ago
[1] https://app.rescript.info/public/share/W_T7E1OC2Wj49ccqlIOOz...

Perhaps it's because the representations are fractured. The link above [1] is the transcript of an episode of Machine Learning Street Talk with Kenneth O. Stanley about the Fractured Entangled Representation Hypothesis.

crazylogger•1h ago
I think next year's AI benchmarks are going to be like this project: https://www.anthropic.com/research/project-vend-1

Give the AI tools and let it do real stuff in the world:

"FounderBench": Ask the AI to build a successful business, whatever that business may be - the AI decides. Maybe try to get funded by YC - hiring a human presenter for Demo Day is allowed. They will be graded on profit / loss, and valuation.

Testing a plain LLM on whiteboard-style questions is meaningless now. Going forward, it will all be multi-agent systems with computer use, long-term memory and goals, and delegation.

mindcrime•13m ago
> I feel like I'm the only one who isn't convinced getting a high score on the ARC eval test means we have AGI.

Wait, what? Approximately nobody is claiming that "getting a high score on the ARC eval test means we have AGI". It's a useful eval for measuring progress along the way, but I don't think anybody considers it the final word.

hackinthebochs•4h ago
Has Chollet ever talked about his change of heart regarding AGI? It wasn't that long ago that he was one of the loudest voices decrying even the concept of AGI, let alone the idea that we were on the path to creating it. Now he's an advocate and has his own prize dataset? It seems rather convenient to change your tune once hundreds of billions are being thrown at AGI (not that I would blame him).
zamderax•4h ago
People are allowed to evolve their opinions. It seems to me he believes that a combination of transformers and program synthesis is key. The big unknown at the moment is how to do program search.
hackinthebochs•4h ago
Absolutely. Presumably there are some specific considerations or pieces of evidence that helped him evolve his opinion. I would be interested in seeing a writeup about it. With him having been a very public advocate against AGI, a writeup of his evolution seems appropriate and would be very edifying for a lot of people.
blibble•4h ago
> Presumably there is some specific considerations or evidence that helped him evolve his opinion.

suitcases full of money?

cubefox•4h ago
ARC-AGI was introduced in 2019:

https://arxiv.org/abs/1911.01547

GPT-3 didn't come out until 2020.

hackinthebochs•3h ago
In my view that just makes his evolution more interesting as it wasn't just a matter of being wow'ed by what ChatGPT could do.
0xCE0•3h ago
He has recently co-founded the NDEA company, so he has to align himself with that. The same kind of vibe change can be felt with Joscha Bach since he took a position at Liquid AI. Communication is not so relaxed anymore.

That said, I'd still listen to these two guys (+ Schmidhuber) more than any other AI guy.

roenxi•4h ago
By both definitions of intelligence in the presentation, we should be saying "how we got to AGI", in the past tense. We're already there. AIs can deal with situations they weren't prepared for in any sense that a human can. They might not do well, but they'll have a crack at it. We could trivially build systems that collect data and do a bit more offline training, if that is what someone wants to see, but there doesn't really seem to be a commercial need for that right now. Similarly, AIs can whip most humans at most domains that require intelligence.

I think the debate has been caught flat-footed by the speed at which all this happened. We're not talking about AGI any more; we're talking about how to build superintelligences hitherto unseen in nature.

cubefox•4h ago
Well, there is also robotics, active inference, online learning, etc. Things animals can do well.
AIPedant•2h ago
Current robots perform very badly on my patented and highly scientific ROACH-AGI benchmark - "is this thing smarter at navigating unfamiliar 3D spaces than a cockroach?"
tmvphil•3h ago
According to this presentation at least, ARC-AGI-2 shows that there is a big meaningful gap in fluid intelligence between normal non-genius humans and the best models currently, which seems to indicate we are not "already there".
saberience•2h ago
There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?

I enjoy seeing people repeatedly move the goalposts for "intelligence" as AIs simply get smarter and smarter every week. Soon AI will have to beat Einstein in Physics, Usain Bolt in running, and Steve Jobs in marketing to be considered AGI...

tmvphil•50m ago
> There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?

Where did I say there was nothing meaningful about current capabilities? I'm saying that what is novel about a claim of "AGI" (as opposed to a claim that "a computer does something better than humans", which has been obviously true since the ENIAC) is the ability to do, at some level, everything a normal human intelligence can do.

TheAceOfHearts•4h ago
The first highlight of this video is getting to see a preview of the next ARC dataset. Otherwise, most of what Chollet says here has already been covered in his other podcast appearances and videos. It's a good video if you're not familiar with his work, but if you've seen some of his recent interviews you can probably skip the first 20 minutes.

The second highlight from this video is the section from 29 minutes onward, where he talks about designing systems that can build up rich libraries of abstractions which can be applied to new problems. I wish he had lingered more on exploring and explaining this approach, but maybe they're trying to keep a bit of secret sauce because it's what his company is actively working on.

One of the major points which seems to be emerging from recent AI discourse is that the ability to integrate continuous learning seems like it'll be a key element in building AGI. Context is fine for short tasks, but if lessons are never preserved you're severely capped with how far the system can go.

vixen99•4h ago
Is the text available for those who don't hear so well?
jasonlotito•3h ago
At the very least, YouTube provides a transcript and a "Show Transcript" button in the video description, which you can click on to follow along.
heymijo•2h ago
When I watched the video I had the subtitles on. The automatic transcript is pretty good, though "test-time", which is used frequently, gets transcribed as "Tesla", so watch out for that.
saberience•4h ago
The ARC prize/benchmark is a terrible judge of whether we've reached AGI.

If we assume that humans have "general intelligence", we would expect all humans to be able to ace ARC... but they can't. Try asking average people, e.g. supermarket workers, gas station attendants, etc., to do the ARC puzzles: they will do poorly, especially on the newer ones. Yet an AI has to do perfectly to prove it has general intelligence? (Not trying to throw shade here, but the reality is this test is more like an IQ test than an AGI test.)

ARC is a great example of AI researchers moving the goalposts for what we consider intelligent.

Let's get real: Claude Opus is smarter than 99% of people right now, and I would trust its decision-making over that of 99% of people I know in most situations, except perhaps emotion-driven ones.

The ARC-AGI benchmark is just a gimmick. Also, since it's a visual test and the current models are text-based, it's actually rigged against the AI models anyway, since their datasets were completely text-based.

Basically, it's a test of some kind, but it doesn't mean quite as much as Chollet thinks it means.

leumon•3h ago
He said in the video that they tested regular people (Uber drivers, etc.) on ARC-AGI-2, and at least two people were able to solve each task (an average of 9-10 people saw each task). Also, this quote from the paper: "None of the self-reported demographic factors recorded for all participants—including occupation, industry, technical experience, programming proficiency, mathematical background, puzzle-solving aptitude, and various other measured attributes—demonstrated clear, statistically significant relationships with performance outcomes. This finding suggests that ARC-AGI-2 tasks assess general problem-solving capabilities rather than domain-specific knowledge or specialized skills acquired through particular professional or educational experiences."
daveguy•3h ago
It is not a judge of whether we got to AGI, and literally no one except straw-manning critics claims it is. The point is that an AGI should easily be able to pass it, but it can obviously be passed without getting to AGI. It's a necessary but not sufficient criterion. If something can't pass a test as simple as ARC (which no AI currently can), then it's definitely not AGI. Anyone claiming AGI should be able to point their AI at the problem and get an 80+% solution rate. Current attempts on the second ARC are below 10%, with zero-shot attempts even worse. Even the better-performing LLMs on the first ARC couldn't do well without significant pre-training. In short, the G in AGI stands for general.
saberience•1h ago
So do you agree that a human that CANNOT solve ARC doesn't have general intelligence?

If we think humans have "GI" then I think we have AIs right now with "GI" too. Just like humans do, AIs spike in various directions. They are amazing at some things and weak at visual/IQ test type problems like ARC.

adamgordonbell•1h ago
It's a good question, but only complicated answers are possible. A puppy, a crow, and a raccoon all have intelligence, but they certainly can't all pass the ARC challenge.

I think the charitable interpretation is that intelligence is made up of many skills, and AIs are superhuman at some of them, like image recognition.

Therefore, future efforts need to go into the areas where AIs are significantly less skilled. And since they are good at memorizing things, knowledge questions are the wrong direction; anything most humans could solve but AIs cannot, especially something as generic as pattern matching, should be an important target.

cttet•3h ago
Maybe it's a cultural difference, but I feel that the "supermarket workers, gas station attendants" (in an Asian country) that I know of would be quite capable of most ARC tasks.
profchemai•2h ago
Out of hundreds of evals, ARC is a very distinct and unique one, and most frontier models are also visual now; I don't see the harm in having this instead of another text eval.
Workaccount2•2h ago
This is what is called "spiky" intelligence: a model might be able to crack PhD physics problems and solve byzantine pattern-matching games at the 90th percentile, but it can't figure out how to look up a company and copy its address onto the "customer" line of an invoice.
chromaton•3h ago
Current AI systems don't have a great ability to take instructions or information about the state of the world and produce new output based upon that. Benchmarks that emphasize this ability help greatly in progress toward AGI.
jacquesm•2h ago
Let's not. Seriously. I absolutely love François and have used his work extensively. But looking around me at the social impact of AI, I am really not convinced that this is what the world needs right now; if we can stave off the turning point for another decade or two, humanity will likely benefit from that. The last thing we need is to inject yet another instability into a planet that is already fighting existential crises on a number of fronts.
thatguy0900•2h ago
It doesn't matter what should or should not happen. Technology will continue to race forward at breakneck speed while everyone involved pats each other on the back for making a bunch of money before the consequences hit
nessbot•2h ago
technology doesn't just advance itself
lo_zamoyski•2h ago
This is true. We have a choice...in principle.

But in practice, it's like stopping an arms race.

bnchrch•2h ago
No, but one thing is certain, in large human systems you can only redirect greed, you can't stop it.
alex_duf•1h ago
If the incentive is there, the technology will advance. I hear "we need to slow down the progress of technology", but that misunderstands _why_ it progresses. The slow-down camp really needs to look into what the incentive to slow down would be.

Personally, I don't think it's possible at this stage. The cat's out of the bag (this new class of tools is working), and the economic incentive is way too strong.

modeless•2h ago
ARC-AGI-3 reminds me of PuzzleScript games: https://www.puzzlescript.net/Gallery/index.html

There are dozens of ready-made, well-designed, and very creative games there. All are tile-based and solved with only arrow keys and a single action button. Maybe someone should make a PuzzleScript AGI benchmark?
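
A sketch of what the scoring harness for such a benchmark could look like; the level/agent API here is invented purely for illustration (PuzzleScript itself is a JavaScript project):

    ACTIONS = ["up", "down", "left", "right", "act"]   # PuzzleScript's full input space

    def score_agent(agent, levels, step_budget=1000):
        solved = 0
        for level in levels:
            state = level.reset()
            for _ in range(step_budget):
                state, won = level.step(agent.act(state, ACTIONS))
                if won:
                    solved += 1
                    break
        return solved / len(levels)        # fraction of held-out levels beaten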

mNovak•12m ago
This game is great!

https://nebu-soku.itch.io/golfshall-we-golf

Maybe someone can make an MCP connection for the AIs to practice. But I think the idea of the benchmark is to reserve some puzzles for private evaluation, so that they're not in the training data.

visarga•1h ago
I think intelligence is search. Search is exploration + learning. So intelligence is not in the model or in the environment, but in their mutual dance. A river is not the banks, nor the water, but their relation. ARC is just a frozen snapshot of the banks, not the dynamic environment we have.
bogtog•48m ago
I wonder how much of the slow progress on ARC can be explained by the tasks' visual properties, which make them easy for humans but hard for LLMs.

My impression is that models are pretty bad at interpreting grids of characters. Yesterday, I was trying to get Claude to convert a message into a cipher where it turned a 98-character string into a 7x14 grid, with sequential letters moving 2 right and 1 down (i.e., like a knight in chess). Claude seriously struggled.
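
For the curious, one possible reconstruction of that cipher in Python. The comment doesn't say how collisions are handled, and a raw down-1/right-2 walk on a 7x14 torus revisits cells after 7 steps, so this sketch assumes you slide right to the next free cell:

    def knight_cipher(msg, rows=7, cols=14):
        assert len(msg) == rows * cols            # 98 characters exactly
        grid = [[None] * cols for _ in range(rows)]
        r = c = 0
        for ch in msg:
            while grid[r][c] is not None:         # collision: slide right within the row
                c = (c + 1) % cols
            grid[r][c] = ch
            r, c = (r + 1) % rows, (c + 2) % cols # down 1, right 2, wrapping
        return "\n".join("".join(row) for row in grid)

    print(knight_cipher("".join(chr(97 + i % 26) for i in range(98))))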

Yet Francois always pumps up the "fluid intelligence" component of this test and emphasizes how easy the tasks are for humans. Humans would presumably be terrible at them too if they had to look at them character by character.

This feels like a somewhat similar case (an intuition lie?) to the Apple paper showing that reasoning models can't do Tower of Hanoi past 10+ disks. Readers will intuitively think about how they themselves could tediously work through an arbitrarily long Tower of Hanoi, which is what the paper alludes to. But the more appropriate analogy would be writing out all 1000+ moves on a piece of paper at once and being 100% correct, which is obviously much harder.
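
To make that concrete, the standard recursion shows where the 1000+ figure comes from: 10 disks already require 2^10 - 1 = 1023 moves, every one of which has to be written out correctly:

    def hanoi(n, src="A", dst="C", via="B"):
        if n == 0:
            return []
        # Move n-1 disks out of the way, move the largest, then restack.
        return hanoi(n - 1, src, via, dst) + [(src, dst)] + hanoi(n - 1, via, dst, src)

    print(len(hanoi(10)))   # 1023 moves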

ltbarcly3•9m ago
There is some kind of massive brigading happening on this thread. Lots of thoughtful comments are downmodded or flagged (including mine, which I thought was pretty thoughtful; I even said poop instead of shit).

https://news.ycombinator.com/item?id=44492241

My comment was basically instantly flagged. I see at least 3 other flagged comments that I can't imagine deserve to be flagged.

acegod•4m ago
While I think François Chollet is a genius for coming up with ARC-AGI, I'm not convinced that it's a great LLM benchmark.

LLMs are fundamentally text-based. The majority of their training is text-based, the majority of their usage is text-based, and a very large majority of their output is text-based. So it seems somewhat bizarre to perform general evaluation of these models using what are effectively image-centric tests.

Evaluating LLM visual skills and reasoning is a very important and reasonable thing to do. And I believe that there are an infinite number of ways to evaluate LLMs and general intelligence and that visual tests are a viable approach. But I personally feel that the mismatch between the core design of LLMs and the evaluation framework of ARC-AGI is simply too large to ignore.

I have a (draft) blog post on this subject, from which I copied some of this comment: https://www.xent.tech/blog/problems-in-llm-benchmarking-and-...

Fun piece of trivia: François Chollet's "On the Measure of Intelligence" was released on November 5, 2019, the exact same day that the full GPT-2 model was released