
Gemini 3 Deep Think

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/
165•tosh•1h ago
https://twitter.com/GoogleDeepMind/status/202198151040070909...

Comments

Metacelsus•1h ago
According to the benchmarks in the announcement, it's comfortably ahead of Claude 4.6. I guess they didn't test ChatGPT 5.3, though.

Google has definitely been pulling ahead in AI over the last few months. I've been using Gemini and finding it's better than the other models (especially for biology where it doesn't refuse to answer harmless questions).

simianwords•1h ago
The comparison should be with GPT 5.2 pro which has been used successfully to solve open math problems.
throwup238•1h ago
The general-purpose ChatGPT 5.3 hasn’t been released yet, just 5.3-codex.
neilellis•54m ago
It's ahead in raw power but not in function. It's like having the world's fastest engine but only one gear! Trouble is, some benchmarks only measure horsepower.
NitpickLawyer•46m ago
> Trouble is, some benchmarks only measure horsepower.

IMO it's the other way around. Benchmarks only measure applied horsepower on a fixed plane, with no friction, and your elephant is a point sphere. Goog's models have always punched above what the benchmarks said in real-world use at high context. They don't focus on "agentic this" or "specialised that", but the raw models, with good guidance, are workhorses. I don't know any other models where you can throw lots of docs at them and get proper context following and data extraction from wherever the data is to wherever you need it.

Davidzheng•44m ago
I gather that Opus 4.6's strengths are in long-context agentic workflows? At least compared to Gemini 3 Pro preview, Opus 4.6 seems to have a lot of advantages.
verdverm•35m ago
It's a giant game of leapfrog; shift or stretch the time window a bit and they all look equivalent.
sigmar•1h ago
Here are the methodologies for all the benchmarks: https://storage.googleapis.com/deepmind-media/gemini/gemini_...

The ARC-AGI-2 score (84.6%) is from the semi-private eval set. If Gemini 3 Deep Think gets above 85% on the private eval set, it will be considered "solved".

>Submit a solution which scores 85% on the ARC-AGI-2 private evaluation set and win $700K. https://arcprize.org/guide#overview

gs17•1h ago
Interestingly, the title of that PDF calls it "Gemini 3.1 Pro". Guess that's dropping soon.
sigmar•1h ago
I looked at the file name but not the document title (specifically because I was wondering if this is 3.1). Good spot.

edit: they just removed the reference to "3.1" from the pdf

WarmWash•31m ago
The rumor was that 3.1 was today's drop
staticman2•28m ago
That's odd considering 3.0 is still labeled a "preview" release.
riku_iki•21m ago
> If gemini-3-deepthink gets above 85% on the private eval set, it will be considered "solved"

They'll never run it on the private set, because that would mean leaking it to Google.

lukebechtel•1h ago
Arc-AGI-2: 84.6% (vs 68.8% for Opus 4.6)

Wow.

https://blog.google/innovation-and-ai/models-and-research/ge...

karmasimida•1h ago
It is over
baal80spam•1h ago
I for one welcome our new AI overlords.
mnicky•45m ago
Well, a fair comparison would be with GPT-5.x Pro, which is the same class of model as Gemini Deep Think.
nubg•45m ago
Weren't we barely scraping 1-10% on this with state-of-the-art models a year ago, and wasn't it considered the final boss, i.e. solve this and it's almost AGI-like?

I ask because I cannot distinguish all the benchmarks by heart.

verdverm•37m ago
Here's a good thread spanning the last month or so, updated as each model comes out

https://bsky.app/profile/pekka.bsky.social/post/3meokmizvt22...

tl;dr - Pekka says Arc-AGI-2 is now toast as a benchmark

Aperocky•15m ago
If you look at the problem space, it's easy to see why it's toast. Maybe there's intelligence in there, but hardly general intelligence.
fishpham•36m ago
Yes, but benchmarks like this are often flawed because leading model labs frequently engage in 'benchmarkmaxxing' - i.e. improvements on ARC-AGI-2 don't necessarily indicate similar improvements in other areas (though this does seem like a step-function increase in intelligence for the Gemini line of models)
aeyes•13m ago
https://arcprize.org/leaderboard

$13.62 per task - so we need another 5-10 years for the price of running this to become reasonable?

But the real question is whether they just fit the model to the benchmark.
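As a rough sanity check on that 5-10 year guess: the arithmetic only works out if you pick a decay rate, so this sketch assumes (purely hypothetically) that inference cost per task halves once a year, and picks an arbitrary $0.50/task as "reasonable" - both numbers are illustrative, not published trends:

```python
import math

def years_to_target(cost_now, target, halving_period_years=1.0):
    """Years until cost_now decays to target under repeated halving."""
    return halving_period_years * math.log2(cost_now / target)

# $13.62/task today -> roughly how long until ~$0.50/task?
years = years_to_target(13.62, 0.50)  # ~4.8 years under the halving assumption
```

So a yearly halving puts "reasonable" about 5 years out; a slower cadence (halving every 18-24 months) stretches it toward the 10-year end.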

saberience•7m ago
ARC-AGI (and ARC-AGI-2) is the most overhyped benchmark around, though.

It's completely misnamed. It should be called "useless visual puzzle benchmark 2".

Firstly, it's a visual puzzle, making it way easier for humans than for models trained primarily on text. Secondly, it's not actually that obvious or easy for humans to solve!

So the idea that if an AI can solve "ARC-AGI" or "ARC-AGI-2" it's super smart or even "AGI" is frankly ridiculous. It's a puzzle that basically means nothing, other than that models can now solve "ARC-AGI".

simianwords•1h ago
OT, but my intuition says there's a spectrum:

- non thinking models

- thinking models

- best-of-N models like Deep Think and GPT Pro

Each one has a certain computational complexity. Simplifying a bit, I think they map to linear, quadratic, and n^3 respectively.

I think there's a certain class of problems that can't be solved without thinking, because it necessarily involves writing in a scratchpad. And the same goes for best-of-N, which involves exploring.

Two open questions

1) What's the next level up here - is there a 4th option?

2) Can a sufficiently large non-thinking model perform the same as a smaller thinking one?
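The best-of-N tier is simple enough to sketch in a few lines; `generate` and `score` here are hypothetical stand-ins for sampling one candidate answer and rating it, not any lab's actual API:

```python
def best_of_n(generate, score, n=8):
    """Sample n candidate answers and keep the highest-scoring one.

    Cost is roughly n times one generation, which is where the extra
    computational complexity of this tier comes from.
    """
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Hypothetical usage: candidates are drafts, score prefers the longest.
drafts = iter(["short", "a much longer draft", "mid-size draft"])
best = best_of_n(lambda: next(drafts), score=len, n=3)
```

The open question above then becomes: can a single forward pass of a big enough model match `n` scored samples from a smaller one?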

NitpickLawyer•54m ago
> best-of-N models like Deep Think and GPT Pro

Yeah, these are made possible largely by better performance at high context lengths. You also need a step that gathers all N outputs, selects the best ideas/parts, and compiles the final output. Goog has been SotA at useful long context for a while now (since 2.5, I'd say). Many others have shipped "1M context", but their usefulness past 100k-200k is iffy.

What's even more interesting than maj@n or best-of-n is pass@n. For a lot of applications you can frame the question and search space such that pass@n is your success rate. Think security exploit finding, or optimisation problems with quick checks (better algos, kernels, infra routing, etc). It doesn't matter how good your pass@1 or avg@n is; all you care about is that you find more as you spend more time. Literally throwing money at the problem.
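For reference, the standard unbiased pass@k estimator (popularized by OpenAI's HumanEval paper) captures this: given n sampled attempts of which c passed the check, it estimates the chance that at least one of k samples succeeds. A minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased estimate of P(at least one of k samples passes),
    given n total samples of which c passed the verifier."""
    if n - c < k:
        return 1.0  # too few failures: every size-k subset contains a pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For a fixed success rate, this number rises monotonically in k, which is exactly the "spend more, find more" effect: with quick checks, extra samples only ever help.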

mnicky•49m ago
> can a sufficiently large non-thinking model perform the same as a smaller thinking one?

Models from Anthropic have always been excellent at this. See e.g. https://imgur.com/a/EwW9H6q (top-left Opus 4.6 is without thinking).

simianwords•44m ago
It's interesting that Opus 4.6 added a parameter to make it think extra hard.
syntaxing•1h ago
Why a Twitter post and not the official Google blog post? https://blog.google/innovation-and-ai/models-and-research/ge...
meetpateltech•1h ago
The official blog post was submitted earlier (https://news.ycombinator.com/item?id=46990637), but somehow this story ranked up quickly on the homepage.
verdverm•40m ago
@dang will often replace the post url & merge comments

HN guidelines prefer the original source over social posts linking to it.

aavci•39m ago
Agreed - blog post is more appropriate than a twitter post
dang•22m ago
Just normal randomness I suppose. I've put that URL at the top now, and included the submitted URL in the top text.
jonathanstrange•1h ago
Unfortunately, it's only available in the Ultra subscription if it's available at all.
xnx•1h ago
Google is absolutely running away with it. The greatest trick they ever pulled was letting people think they were behind.
dfdsf2•53m ago
Trick? Lol, not a chance. Alphabet is a pure-play tech firm that has to build products to make the tech accessible. They really lack in the latter, and it's visible when you see the interactions of their VPs. Luckily for them, if you build enough of a lead in the tech, you get many chances to sort out the product stuff.
dakolli•3m ago
You sound like Russ Hanneman from SV
amunozo•50m ago
Those black Nazis in the first image model were a cause of insider trading.
neilellis•55m ago
Less than a year to destroy Arc-AGI-2 - wow.
Davidzheng•47m ago
I unironically believe that ARC-AGI-3 will have an introduction-to-solved time of 1 month
etyhhgfff•17m ago
The AGI bar has to be set even higher, yet again.
vessenes•51m ago
Not trained for agentic workflows yet, unfortunately - this looks like it will be fantastic when they have an agent-friendly one. Super exciting.
ramshanker•36m ago
Do we get any model architecture details, like parameter count? A few months back we used to talk more about this; now it's mostly about model capabilities.
Davidzheng•33m ago
I'm honestly not sure what you mean? The frontier labs have kept their architectures secret since GPT-3.5.
sinuhe69•27m ago
I'm pretty certain that DeepMind (and all other labs) will try their frontier (and even private) models on First Proof [1].

And I wonder how Gemini Deep Think will fare. My guess is that it will get halfway on some problems. But we'll have to treat absence as failure, because nobody wants to publish a negative result, even though negative results are so important for scientific research.

[1] https://1stproof.org/

zozbot234•12m ago
The 1st proof original solutions are due to be published in about 24h, AIUI.
simonw•26m ago
The pelican riding a bicycle is excellent. I think it's the best I've seen.

https://simonwillison.net/2026/Feb/12/gemini-3-deep-think/

deron12•24m ago
It's worth noting that you mean excellent in terms of prior AI output. I'm pretty sure this wouldn't be considered excellent from a "human made art" perspective. In other words, it's still got a ways to go!
Manabu-eo•24m ago
How likely is it that this problem is already in the training set by now?
verdverm•20m ago
I've heard it posited that the frontier companies are frontier because they have custom data and evals. That's what I would do too.
throwup238•19m ago
For every combination of animal and vehicle? Very unlikely.

The beauty of this benchmark is that it takes all of two seconds to come up with your own unique one. A seahorse on a unicycle. A platypus flying a glider. A man’o’war piloting a Portuguese man of war. Whatever you want.

recursive•15m ago
No, not every combination. The question is about the specific combination of a pelican on a bicycle. It might be easy to come up with another test, but we're looking at the results from a particular one here.
zarzavat•16m ago
You can always ask for a tyrannosaurus driving a tank.
simonw•14m ago
If anyone trains a model on https://simonwillison.net/tags/pelican-riding-a-bicycle/ they're going to get some VERY weird looking pelicans.
throwup238•22m ago
The reflection of the sun in the water is completely wrong. LLMs are still useless. (/s)
saberience•6m ago
Do you have to keep banging on about this relentlessly?

It was sort of humorous for maybe the first 2 iterations; now it's tacky, cheesy, and just relentless self-promotion.

Again, like I said before, it's also a terrible benchmark.

okokwhatever•8m ago
I need to test the sketch creation ASAP. I need this in my life, because learning to use FreeCAD is too difficult for a busy person like me (and, frankly, I'm also quite lazy).
sho_hn•6m ago
FWIW, the FreeCAD 1.1 nightlies are much easier and more intuitive to use due to the addition of many on-canvas gizmos.