You can ask things like:
- “Why did latency spike last night?”
- “Show me 5xx errors from the payments service.”
- “Create a time series graph showing error counts for the last 6 hours.”
...and Jod will pull, summarize, or even visualize the answers for you.
Before Jod, we spent countless hours digging through CloudWatch and deployment logs, juggling 10+ dashboards just to trace one issue. It often took as much time as writing the actual code. During incidents, things got even worse: too much noise, endless context switching, and a lot of repetitive work. We figured we couldn’t be the only ones feeling that pain, so we decided to build something that could make the process a little easier.
Right now, Jod connects to CloudWatch through an MCP server, which streams responses to the backend over SSE, and the client displays everything in a conversational interface. You can ask questions about your logs, request visualizations with the @Graph annotation, or dig deeper into errors and trends. We’ve actually debugged and fixed multiple issues in Jod’s own codebase using Jod itself.
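To give a rough sense of the plumbing, here’s a minimal sketch of what a CloudWatch log-query tool on an MCP server might look like, using the TypeScript MCP SDK and AWS SDK v3. The server name, tool name, parameters, and defaults are made up for illustration and aren’t Jod’s actual implementation:

```ts
// Hypothetical MCP tool that searches a CloudWatch log group.
// Names ("jod-cloudwatch", "query_logs") and defaults are illustrative only.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CloudWatchLogsClient,
  FilterLogEventsCommand,
} from "@aws-sdk/client-cloudwatch-logs";
import { z } from "zod";

const logs = new CloudWatchLogsClient({}); // uses the default AWS credential chain
const server = new McpServer({ name: "jod-cloudwatch", version: "0.1.0" });

// One tool: filter log events from a log group over a recent time window.
server.tool(
  "query_logs",
  {
    logGroupName: z.string(),
    filterPattern: z.string().optional(), // e.g. '"ERROR"' or a JSON filter
    hours: z.number().default(6),         // look-back window
  },
  async ({ logGroupName, filterPattern, hours }) => {
    const res = await logs.send(
      new FilterLogEventsCommand({
        logGroupName,
        filterPattern,
        startTime: Date.now() - hours * 60 * 60 * 1000,
        limit: 100,
      })
    );
    // Return raw messages; the AI client summarizes or visualizes them.
    const text = (res.events ?? [])
      .map((e) => `${new Date(e.timestamp ?? 0).toISOString()} ${e.message}`)
      .join("\n");
    return { content: [{ type: "text", text: text || "No matching events." }] };
  }
);

// Expose the server over stdio; an SSE/HTTP transport plugs in the same way.
await server.connect(new StdioServerTransport());
```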
That said, it’s still early days, and there’s a lot we want to improve. On our short-term roadmap, we plan to:
- Add support for metrics and traces, not just logs.
- Expand to other providers like Azure and GCP.
- Release a standalone MCP server so developers can plug it into their own AI clients.
If any of this resonates with you, we’d love for you to try it out: https://jodmcp.com. It’s free to get started!
We’d really appreciate your feedback, bug reports, and suggestions on this. Thank you.