This interested me as a simple-ish approach to a tough-ish problem. From the example Trip Planner file:
"""Command line interface to process a trip request.
We use Gemini flash-lite to formalize freeform trip request into the dates and
destination. Then we use a second model to compose the trip itinerary.
* This simple example shows how we can reduce perceived latency by running a fast
model to validate and acknowledge user request while the good but slow model is
handling it.
The approach from this example also can be used as a defense mechanism against
prompt injections. The first model without tool access formalizes the request
into the TripRequest dataclass. The attack surface is significantly reduced by
the narrowness of the output format and lack of tools. Then a second model is
run on this cleanup up input.
*
Before running this script, ensure the `GOOGLE_API_KEY` environment
variable is set to the api-key you obtained from Google AI Studio.
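The two-stage pattern the docstring describes can be sketched in plain Python. In this sketch the two Gemini calls are replaced by stub functions (the function names and the hard-coded values are mine, purely for illustration; the real example uses the google-genai client with structured output), so it only shows the control flow: the raw user text is funneled through a narrow dataclass, and only those fields ever reach the slower, more capable model.

```python
from dataclasses import dataclass


@dataclass
class TripRequest:
    """The narrow boundary between untrusted input and the second model."""
    destination: str
    start_date: str  # ISO date, e.g. "2025-07-01"
    end_date: str


def formalize(freeform: str) -> TripRequest:
    """Stand-in for the fast, tool-less model (e.g. Gemini flash-lite).

    Because it can only emit a TripRequest, instructions injected into
    the freeform text cannot invoke tools or flow verbatim into later
    prompts. A real call would use JSON/structured output here.
    """
    return TripRequest(destination="Kyoto",
                       start_date="2025-07-01",
                       end_date="2025-07-05")


def compose_itinerary(req: TripRequest) -> str:
    """Stand-in for the slower, more capable model.

    It only ever sees the dataclass fields, never the raw user text.
    """
    return (f"Itinerary for {req.destination}, "
            f"{req.start_date} to {req.end_date}: ...")


def handle(freeform: str) -> str:
    req = formalize(freeform)  # fast model: validate the request
    # Acknowledging immediately is the perceived-latency win the
    # docstring mentions; the slow model runs afterwards.
    print(f"Planning a trip to {req.destination}...")
    return compose_itinerary(req)
```

The design point is that the prompt-injection defense and the latency trick fall out of the same structure: the cheap first pass both produces the acknowledgment and sanitizes the input down to a fixed schema.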
qwertox•7mo ago
Also the research agent example at https://colab.research.google.com/github/google-gemini/genai...