I've been building Modulo AI, an AI system that fixes GitHub issues, for the past year.
Early versions took 5+ minutes to analyze a single issue.
After months of optimization, we're now sub-60 seconds with better accuracy. This presentation covers what we learned about the performance characteristics of production LLM systems that nobody talks about:
- Strategies for faster token throughput
- Strategies for quicker time to first token
- Effective context window management
- Model routing strategies
If you're interested in building AI agents, I'm sure you'll find some interesting insights in it!
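To make the model-routing idea concrete, here is a minimal sketch of one common strategy: estimate how hard an issue is, then send easy ones to a small, fast model and escalate the rest to a larger, more accurate one. The scoring heuristic, thresholds, and model names below are illustrative assumptions, not Modulo's actual implementation.

```python
def estimate_complexity(issue_text: str) -> int:
    """Crude complexity score: longer issues and stack traces score higher.
    (Hypothetical heuristic for illustration only.)"""
    score = len(issue_text) // 500
    if "Traceback" in issue_text or "stack trace" in issue_text.lower():
        score += 2
    return score

def route_model(issue_text: str) -> str:
    """Pick a model tier from the complexity estimate."""
    if estimate_complexity(issue_text) <= 1:
        return "small-fast-model"    # low latency, cheap
    return "large-accurate-model"    # slower, better on hard issues

print(route_model("Typo in README"))  # small-fast-model
print(route_model("Traceback (most recent call last): ..." + "x" * 2000))  # large-accurate-model
```

Real systems often replace the heuristic with a cheap classifier call, but the routing shape stays the same: pay for the big model only when the issue demands it.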
Install and try out our GitHub app: https://github.com/apps/solve-bug
Or try Modulo in the browser at: https://moduloware.ai
Here are the code examples for the presentation: https://github.com/kirtivr/pydelhi-talk
What performance issues have you been seeing in your AI agents? And how did you tackle them?