Gemini 3.1 Flash-Lite is our most cost-efficient Gemini model, optimized for low-latency, high-volume, cost-sensitive LLM traffic.
It provides a significant quality increase over the Gemini 2.0 Flash-Lite and 2.5 Flash-Lite models, matching Gemini 2.5 Flash performance across key capability areas:
- Improved response quality: Aims to match 2.5 Flash performance while staying aligned with typical Flash-Lite use cases.
- Improved instruction following: Targeted improvements make it a reliable migration path for complex chatbot and instruction-heavy workflows.
- Improved audio input: Better audio-input quality for tasks such as automated speech recognition (ASR).
- Expanded thinking support: You can control how much reasoning the model performs by choosing a minimal, low, medium, or high thinking level, trading response quality against speed for your use case.
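As a rough sketch of how selecting a thinking level might look in a request body: the `thinkingConfig`/`thinkingLevel` field names below are assumptions for illustration, not confirmed API details; only the four level names come from the description above.

```python
import json

def build_request(prompt: str, thinking_level: str = "low") -> str:
    """Build a hypothetical generateContent-style request body that picks a
    thinking level. Field names are illustrative assumptions, not a
    confirmed schema."""
    # The four levels named in the announcement.
    levels = {"minimal", "low", "medium", "high"}
    if thinking_level not in levels:
        raise ValueError(f"unknown thinking level: {thinking_level}")
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingLevel": thinking_level},
        },
    }
    return json.dumps(body)

# A latency-sensitive caller would pick "minimal"; a quality-sensitive one "high".
print(build_request("Summarize this transcript.", "minimal"))
```

The point of the knob is that the same model slug serves both cheap high-throughput calls and harder reasoning calls, with the level chosen per request.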
---
Already available in Google AI Studio and OpenRouter
https://openrouter.ai/google/gemini-3.1-flash-lite-preview
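Since OpenRouter exposes an OpenAI-compatible chat-completions endpoint, a call might be sketched as below. The model slug is taken from the URL above; the shape of the `reasoning` options is an assumption for illustration, not verified against OpenRouter's schema.

```python
import json

def openrouter_payload(prompt: str, effort: str = "low") -> dict:
    """Sketch of an OpenAI-compatible chat-completions payload for OpenRouter.
    The "reasoning" block's shape and accepted values are assumptions."""
    return {
        # Slug from the OpenRouter URL above.
        "model": "google/gemini-3.1-flash-lite-preview",
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical mapping of the model's thinking levels onto a
        # reasoning-effort option.
        "reasoning": {"effort": effort},
    }

# This payload would be POSTed (with an API key) to the chat-completions endpoint.
print(json.dumps(openrouter_payload("Classify this support ticket.", "minimal")))
```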