
Batch Mode in the Gemini API: Process More for Less

https://developers.googleblog.com/en/scale-your-ai-workloads-batch-mode-gemini-api/
168•xnx•7mo ago

Comments

tripplyons•7mo ago
For those who aren't aware, OpenAI has a very similar batch mode (50% discount if you wait up to 24 hours): https://platform.openai.com/docs/api-reference/batch

It's nice to see competition in this space. AI is getting cheaper and cheaper!
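
For reference, a minimal sketch of what that flow looks like with the official `openai` Python SDK (file names and JSONL contents here are illustrative; see the linked docs for the exact request format):

```python
# Rough sketch of the OpenAI batch flow, assuming a prepared requests.jsonl where each
# line is {"custom_id": ..., "method": "POST", "url": "/v1/chat/completions", "body": {...}}.
from openai import OpenAI

client = OpenAI()

# Upload the JSONL of requests with purpose "batch".
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

# Create the batch job; the 50% discount comes with a 24h completion window.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# Later: poll the job and, once completed, download the output file.
batch = client.batches.retrieve(batch.id)
if batch.status == "completed":
    results = client.files.content(batch.output_file_id).text
```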

fantispug•7mo ago
Yes, this seems to be a common capability - Anthropic and Mistral have something very similar as do resellers like AWS Bedrock.

I guess it lets them better utilise their hardware in quiet times throughout the day. It's interesting they all picked 50% discount.

qrian•7mo ago
Bedrock has a batch mode, but only for Claude 3.5, which is about a year old, so it isn't very useful.
calaphos•7mo ago
Inference throughput scales really well with larger batch sizes (at the cost of latency) due to rising arithmetic intensity and the fact that it's almost always memory-bandwidth limited.
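
A rough back-of-the-envelope of that effect, assuming a weights-dominated decode step and ignoring KV-cache and activation traffic (so the numbers are illustrative only):

```python
# Why bigger batches help decode throughput: the weights are read from memory once per
# step but reused for every sequence in the batch, so FLOPs/byte grows with batch size.
def arithmetic_intensity(batch_size: int, bytes_per_param: float = 2.0) -> float:
    # FLOPs per decode step  ~= 2 * params * batch_size
    # Bytes moved per step   ~= params * bytes_per_param
    # Intensity (FLOPs/byte) ~= 2 * batch_size / bytes_per_param
    return 2 * batch_size / bytes_per_param

for b in (1, 8, 64, 256):
    print(f"batch={b:>3}  ~{arithmetic_intensity(b):.0f} FLOPs/byte")

# Until intensity reaches the hardware's FLOPs-to-bandwidth ratio, decoding stays
# memory-bandwidth bound, so serving more requests per step is close to free.
```
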
briangriffinfan•7mo ago
50% is my personal threshold for a discount going from not worth it to worth it.
bayesianbot•7mo ago
DeepSeek has gone a slightly different route - they give an automatic 75% discount between 16:30-00:30 UTC

https://api-docs.deepseek.com/quick_start/pricing
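
A tiny sketch of checking for that discount window client-side (the 16:30-00:30 UTC cutoffs are from the pricing page above; verify them before relying on this):

```python
# Is it currently DeepSeek's discounted off-peak window (16:30-00:30 UTC)?
from datetime import datetime, time, timezone

def in_offpeak_window(now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    t = now.time()
    # The window wraps past midnight: 16:30-24:00 plus 00:00-00:30.
    return t >= time(16, 30) or t < time(0, 30)

if in_offpeak_window():
    print("off-peak: discounted pricing applies")
```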

dlvhdr•7mo ago
The latest price increases beg to differ
dmos62•7mo ago
What price increases?
rvnx•7mo ago
I guess the Gemini price increase
dmos62•7mo ago
Ah, 2.5 flash non-thinking price was increased to match the price of 2.5 flash thinking.
Workaccount2•7mo ago
No, 2.5 flash non-thinking was replaced with 2.5 Flash Lite, and 2.5 flash thinking had its cost rebalanced (input price increased/output price decreased)

2.5 flash non-thinking doesn't exist anymore. People call it a price increase but it's just confusion about what Google did.

sunaookami•7mo ago
They try to frame it as such but 2.5 Flash Lite is not the same as 2.5 Flash without thinking. It's worse.
dist-epoch•7mo ago
Only because Flash was mispriced to start with. It was set too cheap compared with its capabilities. They didn't raise the price of Pro.
laborcontract•7mo ago
One open secret is that batch mode generations often take much less than 24 hours. I've done a lot of generations where I get my results within 5ish minutes.
ridgewell•7mo ago
It can depend a lot on the shape of your batch to my understanding. A small batch job can be tasked out a lot quicker than a large batch job waiting for just the right moment where capacity fits.
dsjoerg•7mo ago
We used the previous version of this batch mode, which went through BigQuery. It didn't work well for us at the time because we were in development mode and we needed faster cycle time to iterate and learn. Sometimes the response would come back much faster than 24 hours, but sometimes not. There was no visibility offered into what response time you would get; just submit and wait.

You have to be pretty darn sure that your job is going to do exactly what you want to be able to wait 24 hours for a response. It's like going back to the punched-card era. If I could get even 1% of the batch in a quicker response and then the rest more slowly, that would have made a big difference.

cpard•7mo ago
It seems that the 24h SLA is standard for batch inference among the vendors and I wonder how useful it can be when you have no visibility on when the job will be delivered.

I wonder why they do that and who is actually getting value out of these batch APIs.

Thanks for sharing your experience!

vineyardmike•7mo ago
It’s like most batch processes: it’s not useful if you don’t know what the response will be and you’re iterating interactively. But for data pipelines, analytics workloads, etc., you can handle that delay because no one is waiting on the response.

I’m a developer working on a product that lets users upload content. This upload is not time sensitive. We pass the content through a review pipeline, where we do moderation, analysis, and some business-specific checks that the user uploaded relevant content. We’re migrating some of that to an LLM-based approach because (in testing) the results are just as good, and tweaking a prompt is easier than updating code. We’ll probably use a batch API for this and accept that content can take 24 hours to be audited.

cpard•7mo ago
yeah I get that part of batch, but even with batch processing, you usually want to have some kind of sense of when the data will be done. Especially when downstream processes depend on that.

The other part that I think makes batch LLM inference unique is that the results are not deterministic. That's where what the parent was saying comes in: at least some of the data should be available earlier, even if the rest will be available in 24h.

3eb7988a1663•7mo ago
Think of it like you have a large queue of work to be done (eg summarize N decades of historical documents). There is little urgency to the outcome because the bolus is so large. You just want to maintain steady progress on the backlog where cost optimization is more important than timing.
cpard•7mo ago
yes, what you describe feels like a one-off job that you want to run, which is big and also not time critical.

Here's an example:

If you are a TV broadcaster and you want to summarize and annotate the content generated in the past 12 hours you most probably need to have access to the summaries of the previous 12 hours too.

Now if you submit a batch job for the first 12 hours of content, you might end up in a situation where you want to process the next batch but the previous one is not delivered yet.

And imo that's fine as long as you somehow know that it will take more than 12h to complete; the problem is that it might be delivered to you in 1h or in 23h.

That's the part of these batch APIs that I find hard to understand: how do you use them in a production environment outside of one-off jobs?

YetAnotherNick•7mo ago
Contrary to other comments, it's likely not because of queueing or general batch reasons. I think it is because LLMs are unique in the sense that they require a lot of fixed nodes because of VRAM requirements, and hence are harder to autoscale. So likely the batch jobs are executed when they have free resources from the interactive servers.
cpard•7mo ago
that makes total sense and what it entails is that interactive inference >>> batch inference in the market today in terms of demand.
dekhn•7mo ago
Yes, almost certainly in this case Google sees traffic die off when a data center is in the dark. Specifically, there is a diurnal cycle of traffic, and Google usually routes users to close-by resources. So, late at night, all those backends which were running hot doing low-latency replies to users in near-real-time can instead switch over to processing batches. When I built an idle cycle harvester at Google, I thought most of the free cycles would come from low-usage periods, but it turned out that some clusters were just massively underutilized and had free resources all 24 hours.
jampa•7mo ago
> who is actually getting value out of these batch APIs

I used the batch API extensively for my side project, where I wanted to ingest a large amount of images, extract descriptions, and create tags for searching. After you get the right prompt, and the output is good, you can just use the Batch API for your pipeline. For any non-time-sensitive operations, it is excellent.

cpard•7mo ago
What you describe makes total sense. I think that the tricky part is the "non-time-sensitive operations", in an environment where even if you don't care to have results in minutes, you have pipelines that run regularly and there are dependencies on them.

Maybe I'm just thinking too much in data engineering terms here.

dist-epoch•7mo ago
> you have no visibility on when the job will be delivered

You do have visibility - it will be within 24 hours. So don't submit requests you need in 10 hours.

serjester•7mo ago
We've submitted tens of millions of requests at a time and never had it take longer than a couple hours - I think the zone you submit to plays a role.
Jensson•7mo ago
> If I could get even 1% of the batch in a quicker response and then the rest more slowly, that would have made a big difference.

You can do this, just send 1% using the regular API.
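
A minimal sketch of that split, assuming a list of request dicts; where the two halves get sent (interactive vs. batch endpoint) is up to whichever SDK you're using:

```python
import random

# Hypothetical helper: carve off a small random sample for fast, full-price feedback
# and leave the rest for the discounted batch endpoint.
def split_for_quick_feedback(requests: list[dict], sample_frac: float = 0.01):
    shuffled = random.sample(requests, k=len(requests))  # shuffled copy
    n_quick = max(1, int(len(shuffled) * sample_frac))
    return shuffled[:n_quick], shuffled[n_quick:]

all_requests = [{"prompt": f"Summarize review #{i}"} for i in range(10_000)]
quick, deferred = split_for_quick_feedback(all_requests)
# quick    -> regular (interactive) API calls, results in seconds or minutes
# deferred -> batch job, results within the 24h window at ~50% of the price
```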

Implicated•7mo ago
I was also rather puzzled at this comment - why not dev against real time endpoints and batch when you've got things where you need them?
lazharichir•7mo ago
You can also do gemini flash lite for a subset and then batch the rest with flash or pro
nnx•7mo ago
It would be nice if OpenRouter supported batch mode too, sending a batch and letting OpenRouter find the best provider for the batch within given price and response time.
pugio•7mo ago
Hah, I've been wrestling with this ALL DAY. Another example of Phenomenal Cosmic Powers (AI) combined with itty bitty docs (typical of Google). The main endpoint ("https://generativelanguage.googleapis.com/v1beta/models/gemi...") doesn't even have actual REST documentation in the API. The Python API has 3 different versions of the same types. One of the main ones (`GenerateContentRequest`) isn't available in the newest path (`google.genai.types`) so you need to find it in an older version, but then you start getting version mismatch errors, and then pydantic errors, until you finally decide to just cross your fingers and submit raw JSON, only to get opaque API errors.

So, if anybody else is frustrated and not finding anything online about this, here are a few things I learned, specifically for structured output generation (which is a main use case for batching) - the individual request JSON should resolve to this:

```json { "request": { "contents": [ { "parts": [ { "text": "Give me the main output please" } ] } ], "system_instruction": { "parts": [ { "text": "You are a main output maker." } ] }, "generation_config": { "response_mime_type": "application/json", "response_json_schema": { "type": "object", "properties": { "output1": { "type": "string" }, "output2": { "type": "string" } }, "required": [ "output1", "output2" ] } } }, "metadata": { "key": "my_id" } } ```

To get actual structured output, don't just do `generation_config.response_schema`, you need to include the mime-type, and the key should be `response_json_schema`. Any other combination will either throw opaque errors or won't trigger Structured Output (and will contain the usual LLM intros "I'm happy to do this for you...").

So you upload a .jsonl file with the above JSON, and then you try to submit it for a batch job. If something is wrong with your file, you'll get a "400" and no other info. If something is wrong with the request submission you'll get a 400 with "Invalid JSON payload received. Unknown name \"file_name\" at 'batch.input_config.requests': Cannot find field."

I got the above error endless times when trying their exact sample code:

```
BATCH_INPUT_FILE='files/123456' # File ID
curl https://generativelanguage.googleapis.com/v1beta/models/gemi... \
  -X POST \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type:application/json" \
  -d "{
    'batch': {
      'display_name': 'my-batch-requests',
      'input_config': {
        'requests': {
          'file_name': ${BATCH_INPUT_FILE}
        }
      }
    }
  }"
```

Finally got the job submission working via the python api (`file_batch_job = client.batches.create()`), but remember, if something is wrong with the file you're submitting, they won't tell you what, or how.
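
For anyone else stuck here, a rough sketch of that `client.batches.create()` path with the google-genai Python SDK; the argument names are a best guess, so verify them against the version you have installed:

```python
# Sketch only: upload the JSONL of request objects (shaped like the JSON above,
# one per line), then submit it as a batch job and poll for completion.
from google import genai

client = genai.Client()  # reads the API key from the environment

uploaded = client.files.upload(
    file="batch_requests.jsonl",
    config={"mime_type": "application/jsonl"},  # assumption: must be flagged as JSONL
)

batch_job = client.batches.create(
    model="models/gemini-2.5-flash",              # illustrative model name
    src=uploaded.name,                            # e.g. "files/123456"
    config={"display_name": "my-batch-requests"},
)

# Poll until the job leaves its pending/running states, then fetch the results file.
job = client.batches.get(name=batch_job.name)
print(job.state)
```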

TheTaytay•7mo ago
Thank you for posting this! (When I run into errors with posted sample code, I spend WAY too long assuming it’s my fault.)
nacholibrev•6mo ago
> So you upload a .jsonl file with the above JSON, and then you try to submit it for a batch job. If something is wrong with your file, you'll get a "400" and no other info. If something is wrong with the request submission you'll get a 400 with "Invalid JSON payload received. Unknown name \"file_name\" at 'batch.input_config.requests': Cannot find field."

Thanks for your post, I've stumbled upon the same issue as you.

So I should interpret the "Unknown name \"file_name\" at 'batch.input_config.requests'" as an error with the jsonl file and not the payload itself?

I'm trying to submit a batch with a .jsonl file, but I'm always getting the "Unknown name \"file_name\" at 'batch.input_config.requests'" error.

great_psy•7mo ago
Is this an indication of the peak of the AI bubble?

In a way this is saying that there are some GPUs just sitting around so they would rather get 50% than nothing for their use.

graeme•7mo ago
Seems more like electricity pricing, which has peak and offpeak pricing for most business customers.

To handle peak daily load you need capacity that goes unused in offpeak hours.

reasonableklout•7mo ago
Why do you think that this means "idle GPU" rather than a company recognizing a growing need and allocating resources toward it?

It's cheaper because it's a different market with different needs, which can be served by systems optimizing for throughput instead of latency. Feels like you're looking for something that's not there.

dmitry-vsl•7mo ago
Is it possible to use batch mode with fine-tuned models?
segalord•7mo ago
Man, Google's offerings are so inconsistent. Batch processing has been available on Vertex for a while now, and I don't really get why they have two different offerings in Vertex and Gemini. Both are equally inaccessible.
nikolayasdf123•7mo ago
omg I realized this is not Vertex AI face-palm
rockwotj•7mo ago
It’s because Vertex is the “enterprise” offering that is HIPAA compliant, etc. That is why Vertex only has explicit prompt caching and not implicit, etc. Vertex usage is never used for training or model feedback, but the Gemini API's is. Basically the Gemini API is Google’s way of being able to move faster like OpenAI and the other foundation model providers, while still having an enterprise offering. Go check Anthropic’s documentation; they even say if you have enterprise or regulatory needs to go use Bedrock or Vertex.
Deathmax•7mo ago
Vertex's offering of Gemini very much does implicit caching, and that has always been the case [1]. The recent addition of applying implicit cache hit discounts also works on Vertex, as long as you don't use the `global` endpoint and hit one of the regional endpoints.

[1]: http://web.archive.org/web/20240517173258/https://cloud.goog..., "By default Google caches a customer's inputs and outputs for Gemini models to accelerate responses to subsequent prompts from the customer. Cached contents are stored for up to 24 hours."

druskacik•7mo ago
I've been using OpenAI's batch API for some time, then replaced it with Mistral's batch API because it was cheaper (Mistral Small with $0.10 / $0.20 per million tokens was perfect for my use case). This makes me rethink my choice, e.g. Gemini 2.5 Flash-Lite seems to be a better model[0] with only a slight price increase.

[0] https://artificialanalysis.ai/leaderboards/models

tucnak•7mo ago
I really hope it means that 2.5 models will be available for batching in Vertex, too. We had spent quite a bit of effort making it work with BigQuery, and it's really cool when it works. There's an edge case, though, where it doesn't work: when the batch is also referring to a cached prompt. We did report this a few months ago.
anupj•7mo ago
Batch Mode for the Gemini API feels like Google’s way of asking, “What if we made AI more affordable and slower, but at massive scale?” Now you can process 10,000 prompts like “Summarize each customer review in one line” for half the cost, provided you’re willing to wait until tomorrow for the results.
dist-epoch•7mo ago
Most LLM providers have batch mode. Not sure why you are calling them out.
okdood64•7mo ago
I'll take it further: regular cloud compute has had batch workload capabilities at cheaper rates since forever.
diggan•7mo ago
> Now you can process 10,000 prompts like “Summarize each customer review in one line” for half the cost, provided you’re willing to wait until tomorrow for the results.

Sounds like a great option to have available? Not every task I use LLMs for needs immediate responses, and if I wasn't using local models for those things, getting a 50% discount and having to wait a day sounds like a fine tradeoff.

XTXinverseXTY•7mo ago
This is an extremely common use case.

Reading your comment history: are you an LLM?

https://news.ycombinator.com/item?id=44531907

https://news.ycombinator.com/item?id=44531868

okdood64•7mo ago
I don't understand the point you're making. This has been a commonly used offering since cloud blew up.

https://aws.amazon.com/ec2/spot/

kerisi•7mo ago
I've been using this, with nothing notable to mention besides what seems to be a common bug where you receive an empty text response.

https://discuss.ai.google.dev/t/gemini-2-5-pro-with-empty-re...

lopuhin•7mo ago
I find OpenAI's new flex processing more attractive, as it has the same 50% discount but allows you to use the same API as regular chat mode, so you can still do stuff where the Batch API won't work (e.g. evaluating agents). In practice I found it to work well enough when paired with client-side request caching: https://platform.openai.com/docs/guides/flex-processing?api-...
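
Per the flex-processing guide linked above, the switch is essentially one parameter on the normal chat call. A minimal sketch, with `service_tier="flex"` and the model choice being the bits to verify for your account:

```python
# Same chat API as usual, just requesting the cheaper, slower "flex" tier.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="o3",            # illustrative; flex is only available for certain models
    service_tier="flex",   # ask for flex processing instead of the default tier
    messages=[{"role": "user", "content": "Summarize this customer review: ..."}],
    timeout=900,           # flex requests can take much longer, so raise the request timeout
)
print(resp.choices[0].message.content)
```
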
irthomasthomas•7mo ago
It's nice that they stack the batch pricing and caching discount. I asked the Google guy if they did the same but got no reply, so probably not.

Edit: Anthropic also stacks batching and caching discounts
