frontpage.

Made with ♥ by @iamnishanth

Open Source @Github



Gemini 3

https://blog.google/products/gemini/gemini-3/
238•meetpateltech•1h ago

Comments

denysvitali•1h ago
Finally!
thedelanyo•1h ago
Reading the introductory passage, all I can say now is: AI is here to stay.
meetpateltech•1h ago
DeepMind page: https://deepmind.google/models/gemini/

Gemini 3 Pro DeepMind Page: https://deepmind.google/models/gemini/pro/

Developer blog: https://blog.google/technology/developers/gemini-3-developer...

Gemini 3 Docs: https://ai.google.dev/gemini-api/docs/gemini-3

Google Antigravity: https://antigravity.google/

svantana•1h ago
Grok got to hold the top spot on LMArena-text for all of ~24 hours; good for them [1]. With style control enabled, that is. Without style control, Gemini held the fort.

[1] https://lmarena.ai/leaderboard/text

inkysigma•36m ago
Is it just me, or is that link broken because of the Cloudflare outage?

Edit: nvm it looks to be up for me again

bnchrch•1h ago
I've been so happy to see Google wake up.

Many can point to a long history of killed products and soured opinions, but you can't deny they've been the great balancing force (often for good) in the industry.

- Gmail vs Outlook

- Drive vs Word

- Android vs iOS

- Work-life balance and high pay vs the low-salary grind of before.

They've done heaps for the industry. I'm glad to see signs of life. Particularly in their P/E, which was unjustly low for a while.

digbybk•1h ago
Ironically, OpenAI was conceived as a way to balance Google's dominance in AI.
dragonwriter•29m ago
I thought it was a workaround to Google's complete disinterest in productizing the AI research it was doing and publishing, rather than a way to balance their dominance in a market which didn't meaningfully exist.
mattnewton•11m ago
That’s how it turned out, but no, at the time of OpenAI’s founding, “AI” was search and RL, which Google and DeepMind were dominating, and self-driving, which Waymo was leading. And OpenAI was conceptualized as a research org to compete.
63stack•1h ago
- Making money vs general computing
rvz•55m ago
Google has always been there; it's just that many didn't realize DeepMind even existed. I said years ago that they needed to be put to commercial use [0]. And Google AI != DeepMind.

You are now seeing their valuation finally adjusting to that fact all thanks to DeepMind finally being put to use.

[0] https://news.ycombinator.com/item?id=34713073

ThrowawayR2•48m ago
They've poisoned the internet with their monopoly on advertising, the air pollution of the online world, which is a transgression that far outweighs any good they might have done. Much of the negative social effects of the online world come from the need to drive more screen time, more engagement, more clicks, and more ad impressions firehosed into the faces of users for sweet, sweet advertiser money. When Google finally defeats ad-blocking, yt-dlp, etc., remember this.
visarga•34m ago
Yes, this is correct, and it happens everywhere. App Store, Play Store, YouTube, Meta, X, Amazon, and even Uber: they all play in two-sided markets, exploiting both their users and their providers at the same time.
qweiopqweiop•35m ago
Forgot to mention absolutely milking every ounce of their users attention with Youtube, plus forcing Shorts!
drewda•14m ago
For what it's worth, most of those examples are acquisitions. That's not a hit against Google in particular. That's the way all big tech co's grow. But it's not necessarily representative of "innovation."
charcircuit•7m ago
>most of those examples are acquisitions

Taking those products from where they were to the juggernauts they are today was not guaranteed to succeed, nor was it easy. And yes, plenty of innovation happened with these products post-acquisition.

icyfox•1h ago
Pretty happy the under 200k token pricing is staying in the same ballpark as Gemini 2.5 Pro:

Input: $1.25 -> $2.00 (1M tokens)

Output: $10.00 -> $12.00

Squeezes a bit more margin out of app-layer companies, certainly, but there's a good chance that for tasks that really require a SOTA model it can be more than justified.
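For a sense of scale, here is a back-of-the-envelope sketch of what the quoted under-200k-token rates mean per request. The arithmetic uses only the prices listed above; the model-name keys are just labels for this example, not API identifiers.

```python
# Cost comparison using the per-1M-token prices quoted above
# (under-200k-token tier). Purely illustrative arithmetic.

PRICES = {
    "gemini-2.5-pro": {"input": 1.25, "output": 10.00},  # $ per 1M tokens
    "gemini-3-pro":   {"input": 2.00, "output": 12.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the quoted rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 50k-token prompt producing a 2k-token answer:
old = request_cost("gemini-2.5-pro", 50_000, 2_000)  # $0.0825
new = request_cost("gemini-3-pro", 50_000, 2_000)    # $0.124
```

So for a prompt-heavy workload like this one, the per-request cost rises roughly 50%, dominated by the input-price change.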

rudedogg•48m ago
Every recent release has bumped the pricing significantly. If I was building a product and my margins weren’t incredible, I’d be concerned. The input price went up 60% with this one.
icyfox•8m ago
I'm not sure how concerned people should be at the trend lines. If you're building a product that already works well, you shouldn't feel the need to upgrade to a larger parameter model. If your product doesn't work and the new architectures unlock performance that would let you have a feasible business, even a 2x on input tokens shouldn't be the dealbreaker.

If we're paying for a more petaflop-heavy model, it makes sense that costs would go up. What would really concern me is if companies start ratcheting prices up for models with the same level of performance. My hope is that raw hardware costs and OSS releases keep a lid on the margin pressure.

gertrunde•1h ago
"AI Overviews now have 2 billion users every month."

"Users"? Or people that get presented with it and ignore it?

singhrac•47m ago
They're a bit less bad than they used to be. I'm not exactly happy about what this means to incentives (and rewards) for doing research and writing good content, but sometimes I ask a dumb question out of curiosity and Google overview will give it to me (e.g. "what's in flower food?"). I don't need GPT 5.1 Thinking for that.
recitedropper•46m ago
"Since then, it’s been incredible to see how much people love it. AI Overviews now have 2 billion users every month."

Cringe. To get to 2 billion a month they must be counting anyone who sees an AI overview as a user. They should just go ahead and claim the "most quickly adopted product in history" as well.

rvz•1h ago
I expect almost no one to read the Gemini 3 model card. But here is a damning excerpt from the early leaked model card [0]:

> The training dataset also includes: publicly available datasets that are readily downloadable; data obtained by crawlers; licensed data obtained via commercial licensing agreements; user data (i.e., data collected from users of Google products and services to train AI models, along with user interactions with the model) in accordance with Google’s relevant terms of service, privacy policy, service-specific policies, and pursuant to user controls, where appropriate; other datasets that Google acquires or generates in the course of its business operations, or directly from its workforce; and AI-generated synthetic data.

So your Gmail is being read by Gemini and put into the training set for future models. Oh dear. And Google is already being sued over using Gemini to analyze user data, which potentially includes Gmail, by default [1].

Where is the outrage?

[0] https://web.archive.org/web/20251118111103/https://storage.g...

[1] https://www.yahoo.com/news/articles/google-sued-over-gemini-...

stefs•54m ago
I'm very doubtful Gmail mail is used to train the model by default, because emails contain private data, and as soon as that private data shows up in model output, Gmail is done.

"Gmail being read by Gemini" does NOT mean "Gemini is trained on your private Gmail correspondence". It can mean Gemini loads your emails into a session context so it can answer questions about your mail, which is quite different.

inkysigma•50m ago
Isn't Gmail covered under the Workspace privacy policy, which forbids using that data for training? So I'm guessing that's excluded by the "in accordance" clause.
aoeusnth1•40m ago
This seems like a dubious conclusion. I think you missed this part:

> in accordance with Google’s relevant terms of service, privacy policy

recitedropper•24m ago
I'm pretty sure they mention in their various TOSes that they don't train on user data in places like Gmail.

That said, LLMs are the most data-greedy technology of all time, and it wouldn't surprise me that companies building them feel so much pressure to top each other they "sidestep" their own TOSes. There are plenty of signals they are already changing their terms to train when previously they said they wouldn't--see Anthropic's update in August regarding Claude Code.

If anyone ever starts caring about privacy again, this might be a way to bring down the crazy AI capex / tech valuations. It is probably possible, if you are a sufficiently funded and motivated actor, to tease out evidence of training data that shouldn't be there based on a vendor's TOS. There is already evidence some IP owners (like NYT) have done this for copyright claims, but you could get a lot more pitchforks out if it turns out Jane Doe's HIPAA-protected information in an email was trained on.

bilekas•58m ago
> The Gemini app surpasses 650 million users per month, more than 70% of our Cloud customers use our AI, 13 million developers have built with our generative models, and that is just a snippet of the impact we’re seeing

Not to be a negative nelly, but these numbers are definitely inflated by Google literally pushing their AI into everything they can, much like M$. You can't even search Google without getting an AI response. Surely you can't claim those numbers are legit.

blinding-streak•50m ago
Gemini app != Google search.

You're implying they're lying?

AstroBen•46m ago
And you're implying they're being 100% truthful?

Marketing is always somewhere in the middle

lalitmaganti•47m ago
> Gemini app surpasses 650 million users per month

Unless these numbers are just lies, I'm not sure how this is "pushing their AI into everything they can". Especially on iOS, where every user is someone who went to the App Store and downloaded it. Admittedly, on Android Gemini is preinstalled these days, but it's still a choice that users are making to go there rather than it being an existing product they happen to use otherwise.

Now OTOH "AI overviews now have two billion users" can definitely be criticised in the way you suggest.

aniforprez•42m ago
I don't know for sure, but they have to be counting users like me, whose phone had Gemini force-installed in an update and who has only opened the app by accident while trying to figure out how to invoke the old, actually useful Assistant app.
realusername•41m ago
> it's still a choice that users are making to go there rather than being an existing product they happen to use otherwise.

Yes and no, my power button got remapped to opening Gemini in an update...

I removed that but I can imagine that your average user doesn't.

edaemon•32m ago
I unlocked my phone the other day and had the entire screen taken over with an ad for the Gemini app. There was a big "Get Started" button that I almost accidentally clicked because it was where I was about to tap for something else.

As an Android and Google Workspace user, I definitely feel like Google is "pushing their AI into everything they can", including the Gemini app.

joaogui1•46m ago
It says Gemini App, not AI Overviews, AI Mode, etc
recitedropper•31m ago
They claim AI overviews as having "2 billion users" in the sentences prior. They are clearly trying as hard as possible to show the "best" numbers.
Yizahi•8m ago
This is the benefit of bundling; I've been forecasting this for a long time. The only companies who will win the LLM race are the megacorps bundling their offerings, and at most maybe OAI, thanks to its sheer marketing dominance.

For example I don't pay for ChatGPT or Claude, even if they are better at certain tasks or in general. But I have Google One cloud storage sub for my photos and it comes with a Gemini Pro apparently (thanks to someone on HN for pointing it out). And so Gemini is my go to LLM app/service. I suspect the same goes for many others.

coffeecoders•53m ago
Feels like the same consolidation cycle we saw with mobile apps and browsers is playing out here. The winners aren’t necessarily those with the best models, but those who already control the surface where people live their digital lives.

Google injects AI Overviews directly into search, X pushes Grok into the feed, Apple wraps "intelligence" into Maps and on-device workflows, and Microsoft is quietly doing the same with Copilot across Windows and Office.

Open models and startups can innovate, but the platforms can immediately put their AI in front of billions of users without asking anyone to change behavior (not even typing a new URL).

Workaccount2•38m ago
AI Overviews has arguably done more harm than good for them, because people assume it's Gemini, but really it's some ultra-lightweight model made for handling millions of queries a minute, and it has no shortage of stupid mistakes/hallucinations.
acoustics•38m ago
Microsoft hasn't been very quiet about it, at least in my experience. Every time I boot up Windows I get some kind of blurb about an AI feature.
stevesimmons•51m ago
A nice Easter egg in the Gemini 3 docs [1]:

    If you are transferring a conversation trace from another model, ... to bypass strict validation in these specific scenarios, populate the field with this specific dummy string:

    "thoughtSignature": "context_engineering_is_the_way_to_go"
[1] https://ai.google.dev/gemini-api/docs/gemini-3?thinking=high...
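Per the linked docs excerpt, the dummy string goes into the `thoughtSignature` field of a replayed turn. The request shape in this sketch is an illustrative assumption, not the official client API; only the field name and the dummy value come from the docs quoted above.

```python
# Sketch (assumed request shape): when replaying a conversation trace that
# came from another model, the documented dummy value is used in place of a
# real thought signature so strict validation is bypassed.

DUMMY_SIGNATURE = "context_engineering_is_the_way_to_go"  # value from the docs

def replayed_part(text: str) -> dict:
    """Build a hypothetical content part for a transferred conversation trace."""
    return {
        "text": text,
        "thoughtSignature": DUMMY_SIGNATURE,
    }
```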
scrollop•48m ago
Here it makes a text based video editor that works:

https://youtu.be/MPjOQIQO8eQ?si=wcrCSLYx3LjeYDfi&t=797

tylervigen•44m ago
I am personally impressed by the continued improvement in ARC-AGI-2, where Gemini 3 got 31.1% (vs ChatGPT 5.1's 17.6%). To me this is the kind of problem that does not lend itself well to LLMs - many of the puzzles test the kind of thing that humans intuit because of millions of years of evolution, but these concepts do not necessarily appear in written form (or when they do, it's not clear how they connect to specific ARC puzzles).

The fact that these models can keep getting better at this task given the setup of training is mind-boggling to me.

The ARC puzzles in question: https://arcprize.org/arc-agi/2/

grantpitt•33m ago
Agreed, and it also leads performance on ARC-AGI-1. Here's the leaderboard, where you can toggle between ARC-AGI-1 and 2: https://arcprize.org/leaderboard
casey2•36m ago
The first paragraph is pure delusion. Why do investors like delusional CEOs so much? I would take it as a major red flag.
qustrolabe•31m ago
Out of all the companies, Google provides the most generous free access so far. I bet this gives them plenty of data to train even better models.
serjester•29m ago
It's disappointing there's no flash / lite version - this is where Google has excelled up to this point.
aoeusnth1•5m ago
Maybe they're slow-rolling the announcements to be in the news more.
WXLCKNO•13m ago
Valve could learn from Google here
pflenker•8m ago
> Since then, it’s been incredible to see how much people love it. AI Overviews now have 2 billion users every month.

Come on, you can’t be serious.

Cloudflare Global Network experiencing issues

https://www.cloudflarestatus.com/?t=1
2011•imdsm•5h ago•1325 comments

Gemini 3 Pro Preview Live in AI Studio

https://aistudio.google.com/prompts/new_chat?model=gemini-3-pro-preview
342•preek•2h ago•143 comments

Gemini 3 for developers: New reasoning, agentic capabilities

https://blog.google/technology/developers/gemini-3-developers/
228•janpio•1h ago•61 comments

A Day at Hetzner Online in the Falkenstein Data Center

https://www.igorslab.de/en/a-day-at-hetzner-online-in-the-falkenstein-data-center-insights-into-s...
41•speckx•1h ago•5 comments

5 Things to Try with Gemini 3 Pro in Gemini CLI

https://developers.googleblog.com/en/5-things-to-try-with-gemini-3-pro-in-gemini-cli/
38•keithba•1h ago•9 comments

Gemini 3

https://blog.google/products/gemini/gemini-3/
244•meetpateltech•1h ago•66 comments

Google Antigravity, a New Era in AI-Assisted Software Development

https://antigravity.google/blog/introducing-google-antigravity
151•meetpateltech•1h ago•86 comments

Strix Halo's Memory Subsystem: Tackling iGPU Challenges

https://chipsandcheese.com/p/strix-halos-memory-subsystem-tackling
13•PaulHoule•50m ago•5 comments

Google Brings Gemini 3 AI Model to Search and AI Mode

https://blog.google/products/search/gemini-3-search-ai-mode/
36•CrypticShift•1h ago•3 comments

Solving a Million-Step LLM Task with Zero Errors

https://arxiv.org/abs/2511.09030
16•Anon84•1h ago•0 comments

Nearly all UK drivers say headlights are too bright

https://www.bbc.com/news/articles/c1j8ewy1p86o
408•YeGoblynQueenne•3h ago•389 comments

Short Little Difficult Books

https://countercraft.substack.com/p/short-little-difficult-books
79•crescit_eundo•3h ago•35 comments

How Quake.exe got its TCP/IP stack

https://fabiensanglard.net/quake_chunnel/index.html
343•billiob•9h ago•69 comments

Do Not Put Your Site Behind Cloudflare If You Don't Need To

https://huijzer.xyz/posts/123/do-not-put-your-site-behind-cloudflare-if-you-dont
295•huijzer•4h ago•223 comments

Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality

https://github.com/neul-labs/fast-litellm
11•ticktockten•1h ago•3 comments

Google Antigravity

https://antigravity.google/
137•Fysi•1h ago•105 comments

The Miracle of Wörgl

https://scf.green/story-of-worgl-and-others/
94•simonebrunozzi•6h ago•49 comments

Gemini 3 Pro Model Card

https://pixeldrain.com/u/hwgaNKeH
378•Topfi•5h ago•249 comments

Looking for Hidden Gems in Scientific Literature

https://elicit.com/blog/literature-based-discovery
6•ravenical•5d ago•0 comments

Mathematics and Computation (2019) [pdf]

https://www.math.ias.edu/files/Book-online-Aug0619.pdf
40•nill0•4h ago•9 comments

Ruby 4.0.0 Preview2 Released

https://www.ruby-lang.org/en/news/2025/11/17/ruby-4-0-0-preview2-released/
138•pansa2•3h ago•46 comments

GoSign Desktop RCE flaws affecting users in Italy

https://www.ush.it/2025/11/14/multiple-vulnerabilities-gosign-desktop-remote-code-execution/
43•ascii•4h ago•18 comments

Beauty in/of mathematics: tessellations and their formulas

https://www.tandfonline.com/doi/full/10.1080/00036811.2025.2510472
11•QueensGambit•5d ago•0 comments

How many video games include a marriage proposal? At least one

https://32bits.substack.com/p/under-the-microscope-ncaa-basketball
302•bbayles•5d ago•72 comments

I've Wanted to Play That 'Killer Shark' Arcade Game Briefly Seen in 'Jaws'

https://www.remindmagazine.com/article/15694/jaws-arcade-video-game-killer-shark-atari-sega-elect...
19•speckx•4d ago•4 comments

Langfuse (YC W23) Hiring OSS Support Engineers in Berlin and SF

https://jobs.ashbyhq.com/langfuse/5ff18d4d-9066-4c67-8ecc-ffc0e295fee6
1•clemo_ra•10h ago

The Uselessness of "Fast" and "Slow" in Programming

https://jerf.org/iri/post/2025/the_uselessness_of_fast/
94•zdw•6d ago•48 comments

Azure hit by 15 Tbps DDoS attack using 500k IP addresses

https://www.bleepingcomputer.com/news/microsoft/microsoft-aisuru-botnet-used-500-000-ips-in-15-tb...
451•speckx•23h ago•286 comments

The surprising benefits of giving up

https://nautil.us/the-surprising-benefits-of-giving-up-1248362/
164•jnord•12h ago•132 comments

Ditch your (mut)ex, you deserve better

https://chrispenner.ca/posts/mutexes
114•commandersaki•6d ago•132 comments