
Gemini 3.1 Flash-Lite: Built for intelligence at scale

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-lite/
28•meetpateltech•1h ago

Comments

sh4jid•1h ago
The Gemini Pro models just don't do it for me. But I still use 2.5 Flash Lite for a lot of my non-coding jobs, super cheap but great performance. I am looking forward to this upgrade!
simianwords•1h ago
same - pro is usually a miss for me.
sync•1h ago
Unfortunate, significant price increase for a 'lite' model: $0.25 IN / $1.50 OUT vs. Gemini 2.5 Flash-Lite $0.10 IN / $0.40 OUT.
zacksiri•1h ago
This is going to be a fun one to play with. I've been conducting tests on various models for my agentic workflow.

I was just wishing they would make a new flash-lite model, these things are so fast. Unfortunately 2.5-flash and therefore 2.5-flash-lite failed some of my agentic workflows.

If 3.1-flash-lite can do the job, this solves basically all latency issues for agentic workflows.

I publish my benchmarks here in case anyone is interested:

https://upmaru.com/llm-tests/simple-tama-agentic-workflow-q1...

P.S.: The pricing bump is quite significant, but still stomachable if it performs well.

guerython•1h ago
Flash-Lite’s $0.25/$1.50 price finally lets us run the translation+compliance queue without ripping through tokens. We push 400 req/s but keep a 20-second fuzzy cache of hashed prompts and only send the de-duplicated, heuristically filtered text so the model never re-processes the same boilerplate. The thinking-level knob is huge: level 1 by default gives us sub-200ms TTFB and we only bump to level 3 for flagged QA summaries. Anyone else pairing path-specific thinking levels with caches to keep high-frequency workloads sane?
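The pattern described above can be sketched minimally. Everything here is an illustrative assumption, not guerython's actual code: the class and function names, the whitespace/case normalization, the 20-second window default, and the path names used for thinking-level routing.

```python
import hashlib
import time

class FuzzyPromptCache:
    """Short-lived cache keyed by a hash of normalized prompt text,
    so identical boilerplate inside the window is never re-sent."""

    def __init__(self, ttl_seconds=20.0):
        self.ttl = ttl_seconds
        self._store = {}  # hash -> (timestamp, response)

    def _key(self, prompt: str) -> str:
        # Normalize whitespace and case before hashing so near-duplicate
        # boilerplate collides on the same key.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get(self, prompt: str):
        key = self._key(prompt)
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]
        return None

    def put(self, prompt: str, response: str):
        self._store[self._key(prompt)] = (time.monotonic(), response)

# Hypothetical per-path routing: cheap default, deeper thinking only
# for flagged QA summaries.
def thinking_level(path: str) -> int:
    return 3 if path == "qa_summary" else 1
```

Only cache misses reach the model, and the thinking level is decided per request path rather than globally.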
zacksiri•1h ago
Yes, my workflows use caching intensively. It's the only way to keep things fast / economical.
rohansood15•1h ago
For the last 2 years, startup wisdom has been that models will keep getting cheaper and better. Claude first, and now Gemini, have shown that's not the case.

We priced an enterprise contract using Flash 1.5 pricing last summer, and today that contract would have negative unit economics if we used Flash 3. Flash 2.5, and now Flash 3.1 Lite, barely break even.

I predict open-source models and fine-tuning are going to make a real comeback this year for economic reasons.

typs•59m ago
I mean the same level of intelligence does get cheaper. People just care about being on the frontier. But if you track a single level of intelligence the price just drops and drops.
rohansood15•41m ago
What's the cheaper alternative from Gemini for Flash-2.5-lite level intelligence when it gets deprecated on 22nd July 2026?
simianwords•54m ago
Not true. You should measure cost by the amount of money spent per task. I would argue that this lite version is equivalent to the older Flash.
rohansood15•44m ago
Yea, but there is a whole world of tasks for which Flash 2.5-lite was sufficiently intelligent. Given Google's deprecation policy, there will soon be no way to get that intelligence at that price.
simianwords•20m ago
I hope they release models at every intelligence resolution, although the thinking-effort setting can be a good alternative.
dktp•35m ago
Opus 4.5 became significantly cheaper than Opus 4.1
xnx•24m ago
> We priced an enterprise contract using Flash 1.5 pricing last summer,

Interesting. Flash 1.5 was already a year old at that point.

k9294•55m ago
You can test Gemini 3.1 Lite transcription capabilities in https://ottex.ai — the only dictation app supporting Gemini models with native audio input.

We benchmarked it for real-life voice-to-text use cases:

                <10s    10-30s   30s-1m    1-2m    2-3m
  Flash         2548     2732     3177     4583    5961
  Flash Lite    1390     1468     1772     2362    3499
  Faster by    1.83x    1.86x    1.79x   1.94x   1.70x

  (latency in ms, median over 5 runs per sample, non-streaming)
Key takeaways:

- 1.8x faster than Gemini 3 Flash on average
- 1-2 sec transcription time for short recordings
- ~$0.50/mo for heavy users (10h+ transcription)
- Best-in-class WER and formatting instruction following
- Multilingual: one model, 100+ languages
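The "Faster by" row in the table is just the ratio of the two median rows; a quick check of the figures as printed:

```python
flash      = [2548, 2732, 3177, 4583, 5961]  # median latency, ms
flash_lite = [1390, 1468, 1772, 2362, 3499]

# Speedup per duration bucket: Flash latency / Flash Lite latency.
speedups = [round(f / fl, 2) for f, fl in zip(flash, flash_lite)]
print(speedups)  # [1.83, 1.86, 1.79, 1.94, 1.7]
```

The mean of those ratios is about 1.82, consistent with the "1.8x faster on average" takeaway.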

Gemini is slowly making $15/month voice apps obsolete.

simianwords•52m ago
You know what would be great? A lightweight wrapper model for voice that can use heavier ones in the background.

That much is easy, but what if you could also speak to and interrupt the main voice model and keep giving it instructions? Like speaking to customer support, but instead of being put on hold you can ask several questions and get live updates.

k9294•9m ago
It's actually a nice idea - an always-on micro AI agent with voice-to-text capabilities that listens and acts on your behalf.

Actually, I'm experimenting with this kind of stuff and trying to find a nice UX to make Ottex a voice command center - to trigger AI agents like Claude, open code to work on something, execute simple commands, etc.

stri8ted•39m ago
Can you share WER comparisons against other ASR models? Especially for non-English.
k9294•16m ago
I've been experimenting with Gemini 3.1 Flash Lite and the quality is very good.

I haven't found official benchmarks yet, but you can find Gemini 3 Flash word error rate benchmarks here: https://artificialanalysis.ai/speech-to-text/models/gemini — they are close to SOTA.

I speak daily in both English and Russian and have been using Gemini 3 Flash as my main transcription model for a few months. I haven't seen any model that provides better overall quality in terms of understanding, custom dictionary support, instruction following, and formatting. It's the best STT model in my experience. Gemini 3 Flash has somewhat uncomfortable latency though, and Flash Lite is much better in this regard.

GodelNumbering•44m ago
That's a 150% increase in input costs and a 275% increase in output costs over the same-sized previous-generation model (2.5 Flash-Lite).
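Those percentages follow directly from the list prices quoted upthread ($ per 1M tokens):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old price to new price."""
    return (new - old) / old * 100

# 2.5 Flash-Lite ($0.10 in / $0.40 out) -> 3.1 Flash-Lite ($0.25 / $1.50)
print(round(pct_increase(0.10, 0.25)))  # input:  150
print(round(pct_increase(0.40, 1.50)))  # output: 275
```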
xnx•40m ago
I'm still clinging to gemini-2.0-flash, which I think is still free for API use(?!).
vlmutolo•33m ago
Lots of comments about the price change, but Artificial Analysis reports that 3.1 Flash-Lite (reasoning) used fewer than half the tokens of 2.5 Flash-Lite (reasoning).

This will likely bring the cost below 2.5 flash-lite for many tasks (depends on the ratio of input to output tokens).

That said, AA also reports that 3.1 FL was 20% more expensive to run for their complete Intelligence index benchmark.

The overall point is that cost is extremely task-dependent, and it doesn’t work to just measure token cost because reasoning can burn so many tokens, reasoning token usage varies by both task and model, and similarly the input/output ratios vary by task.
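That task-dependence is easy to make concrete: per-request cost is input_tokens x input_price + output_tokens x output_price, so a model that is pricier per token can narrow (or widen) the gap depending on how many tokens it actually emits. The token counts below are purely illustrative; only the prices and the roughly-halved reasoning-token figure come from the thread:

```python
def task_cost(in_tokens, out_tokens, in_price, out_price):
    """Dollar cost of one request; prices are $ per 1M tokens."""
    return (in_tokens * in_price + out_tokens * out_price) / 1e6

# Illustrative only: same 2k-token prompt, and suppose 3.1 Flash-Lite
# emits half the reasoning/output tokens that 2.5 Flash-Lite did.
cost_25 = task_cost(2_000, 8_000, 0.10, 0.40)  # 2.5 Flash-Lite prices
cost_31 = task_cost(2_000, 4_000, 0.25, 1.50)  # 3.1 Flash-Lite prices
print(cost_25, cost_31, round(cost_31 / cost_25, 2))
```

In this made-up case the sticker-price multiple (2.5x in, 3.75x out) compresses to about 1.9x per task; whether a given workload lands cheaper or pricier depends on its input/output ratio and its real reasoning-token counts, which is exactly the point above.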