frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Google unveils Gemini 3 AI model and AI-first IDE called Antigravity

https://arstechnica.com/google/2025/11/google-unveils-gemini-3-ai-model-and-ai-first-ide-called-a...
1•Bender•53s ago•0 comments

Semantic Query Engines with Matthew Russo (MIT)

1•CShorten•55s ago•0 comments

Show HN: InsForge – A Postgres BaaS built for prompt-driven development

https://insforge.dev/
1•tonychang430•2m ago•0 comments

Alexander Grothendieck

https://en.wikipedia.org/wiki/Alexander_Grothendieck
1•tosh•3m ago•0 comments

A Day at Hetzner Online in the Falkenstein Data Center

https://www.igorslab.de/en/a-day-at-hetzner-online-in-the-falkenstein-data-center-insights-into-s...
2•speckx•4m ago•0 comments

Why Bill Gates Warned Microsoft's CEO over OpenAI Investment

https://businesschief.com/news/why-bill-gates-warned-microsofts-ceo-over-openai-investment
1•ZeljkoS•4m ago•0 comments

Peec AI raised $21M Series A to help brands win in AI search

https://peec.ai/blog/we-raised-21m-series-a-to-help-brands-win-in-ai-search
1•janpio•5m ago•0 comments

Databricks in talks to raise capital at above $130B valuation

https://www.reuters.com/business/databricks-talks-raise-capital-130-billion-valuation-information...
1•Avalaxy•5m ago•0 comments

Real-time interactive words in chat (open source, MIT)

https://trustquery.com/
1•ronitelman•5m ago•1 comments

Lionel Messi says MLS must loosen spending rules in order to thrive

https://www.theguardian.com/football/2025/oct/28/lionel-messi-mls-spending-world-cup-2026-argenti...
1•PaulHoule•6m ago•0 comments

Lambda.ai announces Series E, raises over $1.5B

https://lambda.ai/blog/lambda-raises-over-1.5b-from-twg-global-usit-to-build-superintelligence-cl...
2•genpage•6m ago•0 comments

The McDonalds Test (2019)

https://www.plough.com/en/topics/justice/social-justice/economic-justice/the-mcdonalds-test
1•tosh•7m ago•0 comments

PSA: syncthing-fork has changed owners

https://forum.syncthing.net/t/does-anyone-know-why-syncthing-fork-is-no-longer-available-on-githu...
1•raybb•7m ago•0 comments

Starlink's method of dodging solar storms may make it slower, for longer

https://www.theregister.com/2025/11/18/starlinks_method_of_dodging_solar/
2•dangle1•8m ago•0 comments

Digital Land for AI: Who Owns the Graph Owns the Universe

https://medium.com/@strategymat/digital-land-for-ai-who-owns-the-graph-owns-the-universe-89af83d7...
1•Mati16•8m ago•0 comments

The Beneventan Memory

https://www.tiro.com/articles/beneventan
1•sonofzork•9m ago•1 comments

The code and open-source tools I used to produce a science fiction anthology

https://compellingsciencefiction.com/posts/the-code-and-open-source-tools-i-used-to-produce-a-sci...
1•mojoe•12m ago•0 comments

Roblox to block children from talking to adult strangers after string of lawsuit

https://www.theguardian.com/games/2025/nov/18/roblox-facial-age-estimation-children-adults-chats-...
2•crtasm•12m ago•1 comments

5 Things to Try with Gemini 3 Pro in Gemini CLI

https://developers.googleblog.com/en/5-things-to-try-with-gemini-3-pro-in-gemini-cli/
2•keithba•13m ago•0 comments

Google Antigravity is an 'agent-first' coding tool built for Gemini 3

https://www.theverge.com/news/822833/google-antigravity-ide-coding-agent-gemini-3-pro
1•prodigycorp•13m ago•0 comments

Google Brings Gemini 3 AI Model to Search and AI Mode

https://blog.google/products/search/gemini-3-search-ai-mode/
2•CrypticShift•15m ago•0 comments

Vercel.com: Do not use Vercel

3•scosman•15m ago•1 comments

Google Antigravity, a New Era in AI-Assisted Software Development

https://antigravity.google/blog/introducing-google-antigravity
15•meetpateltech•16m ago•0 comments

How long can it take to become a US citizen?

https://usafacts.org/articles/how-long-can-it-take-to-become-a-us-citizen/
11•speckx•17m ago•9 comments

The Web Game Database

https://webgamedb.com
1•klaussilveira•17m ago•0 comments

Gemini 3

https://deepmind.google/models/gemini/
6•dmotz•18m ago•0 comments

Gemini 3 for developers: New reasoning, agentic capabilities

https://blog.google/technology/developers/gemini-3-developers/
67•janpio•19m ago•3 comments

A Nosological Framework for Understanding Pathologies in Advanced AI

https://www.mdpi.com/2079-9292/14/16/3162
1•rbanffy•20m ago•0 comments

Gemini 3

https://blog.google/products/gemini/gemini-3/
80•meetpateltech•22m ago•6 comments

Detailed virtual brain simulations is changing how we study the brain

https://alleninstitute.org/news/one-of-worlds-most-detailed-virtual-brain-simulations-is-changing...
1•gmays•23m ago•0 comments

Gemini 3 Pro Preview Live in AI Studio

https://aistudio.google.com/prompts/new_chat?model=gemini-3-pro-preview
213•preek•1h ago

Comments

nilsingwersen•1h ago
Feeling great to see something confidential
RobinL•1h ago
- Anyone have any idea why it says 'confidential'?

- Anyone actually able to use it? I get 'You've reached your rate limit. Please try again later'. (That said, I don't have a paid plan, but I've always had pretty much unlimited access to 2.5 pro)

[Edit: working for me now in ai studio]

sd9•1h ago
How long does it typically take after this to become available on https://gemini.google.com/app ?

I would like to try the model, wondering if it's worth setting up billing or waiting. At the moment trying to use it in AI Studio (on the Free tier) just gives me "Failed to generate content, quota exceeded: you have reached the limit of requests today for this model. Please try again tomorrow."

Squarex•53m ago
Today I guess. They weren't releasing preview models this time, and it seems they want to synchronize the release.
mpeg•52m ago
Allegedly it's already available in stealth mode if you choose the "canvas" tool and 2.5. I don't know how true that is, but it is indeed pumping out some really impressive one shot code

Edit: Now that I have access to Gemini 3 preview, I've compared the results of the same one shot prompts on the gemini app's 2.5 canvas vs 3 AI studio and they're very similar. I think the rumor of a stealth launch might be true.

sd9•42m ago
Thanks for the hint about Canvas/2.5. I have access to 3.0 in AI Studio now, and I agree the results are very similar.
csomar•43m ago
It's already available. I asked it "how smart are you really?" and it gave me the same ai garbage template that's now very common on blog posts: https://gist.githubusercontent.com/omarabid/a7e564f09401a64e...
magicalhippo•26m ago
> https://gemini.google.com/app

How come I can't even see prices without logging in... they doing regional pricing?

mil22•1h ago
It's available to be selected, but the quota does not seem to have been enabled just yet.

"Failed to generate content, quota exceeded: you have reached the limit of requests today for this model. Please try again tomorrow."

"You've reached your rate limit. Please try again later."

Update: as of 3:33 PM UTC, Tuesday, November 18, 2025, it seems to be enabled.

misiti3780•59m ago
seeing the same issue.
sottol•54m ago
you can bring your google api key to try it out, and google used to give $300 free when signing up for billing and creating a key.

when i signed up for billing via cloud console and entered my credit card, i got $300 "free credits".

i haven't thrown a difficult problem at gemini 3 pro yet, but i'm sure i got to see it in some of the A/B tests in aistudio for a while. i could not tell which model was clearly better; one was always more succinct and i liked its "style", but they usually offered about the same solution.

lousken•54m ago
I hope some users will switch from cerebras to free up those resources
sarreph•52m ago
Looks to be available in Vertex.

I reckon it's an API key thing... you can more explicitly select a "paid API key" in AI Studio now.

CjHuber•47m ago
For me it’s up and running. I was doing some work with AI Studio when it was released and reran a few prompts already. Interesting also that you can now set thinking level low or high. I hope it does something, in 2.5 increasing maximum thought tokens never made it think more
r0fl•33m ago
Works for me.
informal007•1h ago
It seems Google didn't prepare the Gemini 3 release well and leaked a lot of content early, including the model card earlier today and Gemini 3 on aistudio.google.com.
guluarte•54m ago
it is live in the api

> gemini-3-pro-preview-ais-applets

> gemini-3-pro-preview

__jl__•52m ago
API pricing is up to $2/M for input and $12/M for output

For comparison: Gemini 2.5 Pro was $1.25/M for input and $10/M for output; Gemini 1.5 Pro was $1.25/M for input and $5/M for output.

hirako2000•45m ago
Google went from the loss-leader phase to bait-and-switch.

They have started lock-in with AI Studio. I would say they are still in the market-penetration phase, but stakeholders want to see a path to profit, so they are starting the price skimming; it's just the beginning.

mupuff1234•42m ago
I assume the model is just more expensive to run.
hirako2000•18m ago
Likely. The point is we would never know.
jhack•30m ago
With this kind of pricing I wonder if it'll be available in Gemini CLI for free or if it'll stay at 2.5.
raincole•28m ago
Still cheaper than Sonnet 4.5: $3/M for input and $15/M for output.
brianjking•26m ago
It is impressive that Anthropic has still been able to maintain this pricing.
DeathArrow•47m ago
It generated a quite cool pelican on a bike: https://imgur.com/a/yzXpEEh
GodelNumbering•47m ago
And of course they hiked the API prices

Standard context (≤ 200K tokens):

Input: $2.00 vs $1.25 (Gemini 3 Pro input is 60% more expensive than 2.5)

Output: $12.00 vs $10.00 (Gemini 3 Pro output is 20% more expensive than 2.5)

Long context (> 200K tokens):

Input: $4.00 vs $2.50 (same +60%)

Output: $18.00 vs $15.00 (same +20%)
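A quick way to see what these rates mean per request; a minimal sketch using only the prices quoted above (the dictionary keys are illustrative labels, not official SKU names):

```python
# Per-request cost at the quoted USD-per-1M-token rates.
# The long-context tier applies when the prompt exceeds 200K tokens.
PRICES = {
    "gemini-3-pro": {"std": (2.00, 12.00), "long": (4.00, 18.00)},
    "gemini-2.5-pro": {"std": (1.25, 10.00), "long": (2.50, 15.00)},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    tier = "long" if input_tokens > 200_000 else "std"
    in_rate, out_rate = PRICES[model][tier]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 50K-token prompt with a 2K-token answer:
print(round(request_cost("gemini-3-pro", 50_000, 2_000), 4))    # 0.124
print(round(request_cost("gemini-2.5-pro", 50_000, 2_000), 4))  # 0.0825
```

So the headline +60%/+20% hike works out to roughly a 50% bump on a typical input-heavy request.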

CjHuber•38m ago
Is it the first time long context has separate pricing? I hadn't encountered that before.
Topfi•35m ago
Google has been doing that for a while.
brianjking•30m ago
Google has always done this.
CjHuber•28m ago
Ok wow, then I've always overlooked that.
1ucky•20m ago
Anthropic is also doing this for long context (≥200K tokens) on Sonnet 4.5.
panarky•33m ago
Claude Opus is $15 input, $75 output.
aliljet•44m ago
When will this be available in the cli?
_ryanjsalva•23m ago
Gemini CLI team member here. We'll start rolling out today.
aliljet•21m ago
This is the heroic move everyone is waiting for. Do you know how this will be priced?
skerit•44m ago
Not the preview crap again. Haven't they tested it enough? When will it be available in Gemini-CLI?
CjHuber•31m ago
Honestly I liked 2.5 Pro preview much more than the final version
prodigycorp•44m ago
I'm sure this is a very impressive model, but gemini-3-pro-preview is failing spectacularly at my fairly basic python benchmark, which involves type analysis. In fact, gemini-2.5-pro gets a lot closer (but is still wrong).

For reference: gpt-5.1-thinking passes, gpt-5.1-instant fails, gpt-5-thinking fails, gpt-5-instant fails, sonnet-4.5 passes, opus-4.1 passes (lesser claude models fail).

This is a reminder that benchmarks are meaningless – you should always curate your own out-of-sample benchmarks. A lot of people are going to say "wow, look how much they jumped in x, y, and z benchmark" and start making extrapolations about society and what this means for others. Meanwhile, I'm still wondering how they're still getting this problem wrong.
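For anyone wanting to act on that advice, a minimal sketch of a private out-of-sample harness; the cases and the `stub` model are placeholders, not the commenter's actual benchmark:

```python
# A tiny private benchmark: prompts you never publish, each paired with a
# predicate the model's answer must satisfy. Works with any callable that
# maps a prompt string to an answer string.
CASES = [
    ("Is Python's `sorted` stable?", lambda a: "yes" in a.lower()),
    ("What does `len(\"abc\")` return?", lambda a: "3" in a),
]

def run_benchmark(ask_model, cases=CASES) -> float:
    """Return the pass rate of `ask_model` over the private cases."""
    passed = sum(1 for prompt, ok in cases if ok(ask_model(prompt)))
    return passed / len(cases)

# Stub model for illustration; swap in a real API call here.
stub = lambda prompt: "Yes, it returns 3."
print(run_benchmark(stub))  # 1.0
```

Keeping the cases out of public repos is the whole point: once published, a pass tells you nothing about generalization.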

m00dy•32m ago
that's why everyone using AI for code should code in rust only.
Filligree•30m ago
What's the benchmark?
petters•24m ago
Good personal benchmarks should be kept secret :)
ahmedfromtunis•23m ago
I don't think it would be a good idea to publish it on a prime source of training data.
Hammershaft•17m ago
He could post an encrypted version and post the key with it to avoid it being trained on?
benterix•9m ago
What makes you think it wouldn't end up in the training set anyway?
prodigycorp•22m ago
nice try!
mupuff1234•26m ago
Could also just be rollout issues.
prodigycorp•24m ago
Could be. I'll reply to my comment later with pass/fail results of a re-run.
ddalex•22m ago
I moved from using the model for Python coding to Golang coding and got incredible speedups in reaching a correct version of the code.
mring33621•13m ago
I agree that benchmarks are noise. I guess, if you're selling an LLM wrapper, you'd care, but as a happy chat end-user, I just like to ask a new model about random stuff that I'm working on. That helps me decide if I like it or not.

I just chatted with gemini-3-pro-preview about an idea I had and I'm glad that I did. I will definitely come back to it.

IMHO, the current batch of free, free-ish models are all perfectly adequate for my uses, which are mostly coding, troubleshooting and learning/research.

This is an amazing time to be alive and the AI bubble doomers that are costing me some gains RN can F-Off!

testartr•11m ago
and models are still pretty bad at playing tic-tac-toe; they can do it, but they think way too much

it's easy to focus on what they can't do
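The tic-tac-toe complaint is easy to make measurable; a sketch that scores a model's proposed move against exhaustive minimax (pure Python, no API involved):

```python
# Exhaustive minimax for tic-tac-toe. Board is a 9-element list of
# "X", "O", or None; value is +1 if X wins, -1 if O wins, 0 for a draw.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Game value with `player` to move, searching the full tree."""
    w = winner(b)
    if w:
        return 1 if w == "X" else -1
    if all(b):
        return 0
    vals = []
    for i in range(9):
        if not b[i]:
            b[i] = player
            vals.append(minimax(b, "O" if player == "X" else "X"))
            b[i] = None
    return max(vals) if player == "X" else min(vals)

def optimal_moves(b, player):
    """All cells achieving the best minimax value for `player`."""
    best, moves = None, []
    for i in range(9):
        if not b[i]:
            b[i] = player
            v = minimax(b, "O" if player == "X" else "X")
            b[i] = None
            if best is None or (v > best if player == "X" else v < best):
                best, moves = v, [i]
            elif v == best:
                moves.append(i)
    return moves

# X can win immediately by completing the top row (cell 2):
board = ["X", "X", None, "O", "O", None, None, None, None]
print(optimal_moves(board, "X"))  # [2]
```

Checking a model's move against `optimal_moves` turns "bad at tic-tac-toe" into a pass rate instead of an impression.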

prodigycorp•8m ago
Except I'm not nitpicking at some limitation of tokenization, like "how many a's are there in strawberry". If you "understand" the code, you shouldn't be getting it wrong.
benterix•7m ago
> This is a reminder that benchmarks are meaningless – you should always curate your own out-of-sample benchmarks.

Yeah I have my own set of tests and the results are a bit unsettling in the sense that sometimes older models outperform newer ones. Moreover, they change even if officially the model doesn't change. This is especially true of Gemini 2.5 pro that was performing much better on the same tests several months ago vs. now.

Iulioh•59s ago
A lot of newer models are geared towards efficiency, and if you add the fact that more efficient models are trained on the output of less efficient (but more accurate) models....

GPT-4/o3 might be the best we will ever have

WhitneyLand•2m ago
>>benchmarks are meaningless

No they’re not. Maybe you mean to say they don’t tell the whole story or have their limitations, which has always been the case.

>>my fairly basic python benchmark

I suspect your definition of “basic” may not be consensus. Gpt-5 thinking is a strong model for basic coding and it’d be interesting to see a simple python task it reliably fails at.

Rover222•1m ago
curious if you tried grok 4.1 too
nickandbro•42m ago
What we have all been waiting for:

"Create me a SVG of a pelican riding on a bicycle"

https://www.svgviewer.dev/s/FfhmhTK1
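If you run prompts like this at any volume, a cheap first filter is checking that the reply parses as SVG at all; a minimal sketch (the `reply` string below is a stand-in for a model response, not output from Gemini 3):

```python
# Sanity-check a model reply: does it parse as XML with an <svg> root
# that actually contains drawing elements?
import xml.etree.ElementTree as ET

def is_plausible_svg(text: str) -> bool:
    """True if `text` parses as XML with a non-empty <svg> root."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return False
    # Namespaced roots look like "{http://www.w3.org/2000/svg}svg".
    return root.tag.endswith("svg") and len(root) > 0

reply = '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="4"/></svg>'
print(is_plausible_svg(reply))             # True
print(is_plausible_svg("not svg at all"))  # False
```

It says nothing about whether the bird looks like a pelican, but it catches truncated or fenced-markdown replies before a human ever looks.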

Thev00d00•41m ago
That is pretty impressive.

So impressive it makes you wonder if someone has noticed it being used as a benchmark prompt.

burkaman•34m ago
Simon says if he gets a suspiciously good result he'll just try a bunch of other absurd animal/vehicle combinations to see if they trained a special case: https://simonwillison.net/2025/Nov/13/training-for-pelicans-...
jmmcd•19m ago
"Pelican on bicycle" is one special case, but the problem (and the interesting point) is that with LLMs, they are always generalising. If a lab focussed specially on pelicans on bicycles, they would as a by-product improve performance on, say, tigers on rollercoasters. This is new and counter-intuitive to most ML/AI people.
ddalex•16m ago
https://www.svgviewer.dev/s/TVk9pqGE giraffe in a ferrari
bitshiftfaced•33m ago
It hadn't occurred to me until now that the pelican could overcome the short legs issue by not sitting on the seat and instead put its legs inside the frame of the bike. That's probably closer to how a real pelican would ride a bike, even if it wasn't deliberate.
xnx•30m ago
Very aero
CjHuber•42m ago
Interesting that they added an option to select your own API key right in AI Studio's input field. I sincerely hope the times of generous free AI Studio usage are not over.
golfer•39m ago
Supposedly this is the model card. Very impressive results.

https://pbs.twimg.com/media/G6CFG6jXAAA1p0I?format=jpg&name=...

Also, the full document:

https://archive.org/details/gemini-3-pro-model-card/page/n3/...

tweakimp•34m ago
Every time I see a table like this, the numbers go up. Can someone explain what this actually means? Is it just that some tests are solved a bit better, or is this a breakthrough and this model can do something that all the others cannot?
rvnx•28m ago
This is a list of questions and answers that was created by different people.

The questions AND the answers are public.

If the LLM manages through reasoning OR memory to repeat back the answer then they win.

The scores represent the % of correct answers they recalled.
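Following that reasoning-vs-memory point, one crude contamination check is to look for eval items appearing near-verbatim in training text; a sketch using word n-gram overlap (the corpus string is illustrative, not a real training dump):

```python
# Fraction of a question's word n-grams that appear verbatim in a corpus.
# High overlap suggests the item may be memorized rather than reasoned about.
def ngram_overlap(question: str, corpus: str, n: int = 5) -> float:
    """Fraction of the question's word n-grams found verbatim in `corpus`."""
    words = question.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    hits = sum(1 for g in grams if g in corpus.lower())
    return hits / len(grams)

corpus = "quiz: which city is the capital of france? answer: paris."
print(ngram_overlap("which city is the capital of france?", corpus))  # 1.0
```

Real decontamination pipelines are fuzzier (normalization, hashing, paraphrase detection), but even this catches verbatim leakage.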

stavros•10m ago
I estimate another 7 months before models start getting 115% on Humanity's Last Exam.
HardCodedBias•14m ago
If you believe another thread, the benchmarks compare Gemini 3 (probably with thinking) to GPT-5.1 without thinking.

The person also claims that with thinking on the gap narrows considerably.

We'll probably have 3rd party benchmarks in a couple of days.

samuelknight•38m ago
"Gemini 3 Pro Preview" is in Vertex
ponyous•26m ago
Can’t wait to test it out. Been running a ton of benchmarks (1000+ generations) for my AI-to-CAD-model project and noticed:

- GPT-5 medium is the best

- GPT-5.1 falls right between Gemini 2.5 Pro and GPT-5 but it’s quite a bit faster

Really wonder how well Gemini 3 will perform

santhoshr•19m ago
Pelican riding a bicycle: https://pasteboard.co/CjJ7Xxftljzp.png
mohsen1•11m ago
Sometimes I think I should spend $50 on Upwork to get a real human artist to do it first, so we know what we're aiming for. What does a good pelican-riding-a-bicycle SVG actually look like?
robterrell•8m ago
At this point I'm surprised they haven't been training on thousands of professionally-created SVGs of pelicans on bicycles.
Der_Einzige•17m ago
When will they allow us to use modern LLM samplers like min_p, or even better samplers like top N sigma, or P-less decoding? They are provably SOTA and in some cases enable infinite temperature.

Temperature continues to be gated to a maximum of 0.2, and there's still a hidden top_k of 64 that you can't turn off.

I love the google AI studio, but I hate it too for not enabling a whole host of advanced features. So many mixed feelings, so many unanswered questions, so many frustrating UI decisions on a tool that is ostensibly aimed at prosumers...
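For reference, min-p keeps only tokens whose probability is at least `min_p` times the top token's probability, then renormalizes; a pure-Python sketch of the idea, not any provider's API:

```python
# Min-p truncation sampling: the cutoff scales with the model's confidence,
# so a peaked distribution is trimmed hard and a flat one is left diverse.
import random

def min_p_filter(probs: dict[str, float], min_p: float) -> dict[str, float]:
    """Return renormalized probabilities after min-p truncation."""
    cutoff = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= cutoff}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.4, "a": 0.4, "zebra": 0.15, "qua": 0.05}
filtered = min_p_filter(probs, min_p=0.6)  # cutoff = 0.24: keeps "the", "a"
print(filtered)  # {'the': 0.5, 'a': 0.5}

# Sample a token from the filtered distribution:
token = random.choices(list(filtered), weights=filtered.values())[0]
```

Unlike top-k or top-p, the number of surviving tokens adapts per step, which is why it stays stable at high temperatures.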

ttul•16m ago
My favorite benchmark is to analyze a very long audio file recording of a management meeting and produce very good notes along with a transcript labeling all the speakers. 2.5 was decently good at generating the summary, but it was terrible at labeling speakers. 3.0 has so far absolutely nailed speaker labeling.
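One way to score that speaker-labeling benchmark objectively: per-segment label accuracy up to a renaming of speakers, since the model's "S1" may be the reference's "alice". A sketch with hypothetical data, not the commenter's actual meetings:

```python
# Score predicted speaker labels against a hand-labeled reference, taking
# the best accuracy over all assignments of predicted names to real ones.
from itertools import permutations

def label_accuracy(reference: list[str], predicted: list[str]) -> float:
    """Best per-segment accuracy over mappings of predicted -> reference names."""
    ref_names = sorted(set(reference))
    pred_names = sorted(set(predicted))
    best = 0.0
    for perm in permutations(ref_names, len(pred_names)):
        mapping = dict(zip(pred_names, perm))
        hits = sum(mapping[p] == r for p, r in zip(predicted, reference))
        best = max(best, hits / len(reference))
    return best

ref = ["alice", "bob", "alice", "carol", "bob"]
pred = ["S1", "S2", "S1", "S3", "S2"]  # same structure, different names
print(label_accuracy(ref, pred))  # 1.0
```

This brute-forces the name assignment, which is fine for meeting-sized speaker counts; proper diarization metrics (DER) additionally weight segments by duration.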
iagooar•8m ago
What prompt do you use for that?