This is one of those cases where it's really helpful to code, at least once, at one layer of abstraction below the one that seems most natural to you.
Gonna point Claude at our repo and see if I can do an easy conversion; that would make the amount of review I have to do a bit more bearable.
- It’s conceptually simple. An agent is just an object, you assign it tools that are just functions, and agents can call other agents.
- It’s "batteries included". You get a built-in code execution environment for doing math, session management, and web-server mode for debugging with a front-end.
- Optional callbacks provide clean hooks into the magic (for example, anonymizing or de-anonymizing data before and after LLM calls).
- It integrates with any model, supports MCP servers, and it's easy enough to hack in your existing session-management system.
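The "agents are objects, tools are functions, agents call agents" idea can be sketched in plain Go. To be clear, everything below (`Agent`, `Tool`, `CallTool`, `Delegate`) is a hypothetical illustration of the concept, not the framework's actual API:

```go
package main

import "fmt"

// Tool is just a function: it takes arguments and returns a result.
type Tool func(args map[string]float64) float64

// Agent is just an object: a name, a set of tools, and optional sub-agents.
type Agent struct {
	Name      string
	Tools     map[string]Tool
	SubAgents map[string]*Agent
}

// CallTool dispatches to one of the agent's registered tools.
func (a *Agent) CallTool(tool string, args map[string]float64) (float64, error) {
	t, ok := a.Tools[tool]
	if !ok {
		return 0, fmt.Errorf("%s has no tool %q", a.Name, tool)
	}
	return t(args), nil
}

// Delegate hands a tool call off to a named sub-agent.
func (a *Agent) Delegate(agent, tool string, args map[string]float64) (float64, error) {
	sub, ok := a.SubAgents[agent]
	if !ok {
		return 0, fmt.Errorf("%s has no sub-agent %q", a.Name, agent)
	}
	return sub.CallTool(tool, args)
}

func main() {
	// A "calculator" agent whose only tool is an add function.
	calc := &Agent{
		Name: "calculator",
		Tools: map[string]Tool{
			"add": func(args map[string]float64) float64 { return args["a"] + args["b"] },
		},
	}
	// A root agent that owns no tools itself but can call the calculator.
	root := &Agent{
		Name:      "root",
		SubAgents: map[string]*Agent{"calculator": calc},
	}
	sum, _ := root.Delegate("calculator", "add", map[string]float64{"a": 2, "b": 3})
	fmt.Println(sum) // 5
}
```

In a real framework the LLM would pick which tool or sub-agent to invoke from a structured model response; here the dispatch is hard-coded to keep the shape visible.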
I'm working on a course in agent development and it's the framework I plan to teach with.
I would absolutely take this for a spin if I didn't hate Go so much :)
Python is way more ergonomic than Go when dealing with text. Go's performance advantages are basically irrelevant in an AI agent, as execution time is dominated by inference time.
Personally I could see Go being quite nice to use if you want to deploy something as eg a compiled serverless function.
I'm assuming the framework behaves the same way regardless of language, so you could prototype in Python first if you want and then move over to e.g. Go if needed.
Go is the sweet spot of expressive concurrency, a compile-time type system, and a strong standard library with excellent tooling, as you mentioned.
My hope is that, similar to Ruby in web development, Python's mind share in LLM coding will be siphoned to Go.
FWIW, I noticed that colleagues using other languages like Java and JS with Claude Code sometimes get compile errors. I never get compile errors (anymore) with Go. The language is ideal for LLMs. Can't tell how CC is doing lately for Python.
Inference time is only the bottleneck if you are running a single agent loop, for a single consumer, with a single inference call being made at a time.
If you are serving a bunch of users, handling a bunch of requests, not all of which result in inference calls, some of which may result in multiple inference calls being made in parallel in independent contexts, you start to understand that concurrency matters a lot.
Might as well start with a language that helps you handle that concurrency instead of a language that treats it (asyncio) as a bastard edge case undeserving of first-class support.
And… none of them are Gemini specific. You can use them with any model you like, including Gemini.
I’m not an expert, but compared to LangGraph it’s more opinionated and less flexible. It is, however, easier to get started with for basic agent apps.
Worth a spin.