I built AgentLens because debugging multi-agent systems is painful. LangSmith is cloud-only and paid. Langfuse tracks LLM calls but doesn't model agent topology: tool calls, handoffs, and decision trees.
AgentLens is a self-hosted observability platform built specifically for AI agents:
- *Topology graph* — see your agent's tool calls, LLM calls, and sub-agent spawns as an interactive DAG
- *Time-travel replay* — step through an agent run frame-by-frame with a scrubber timeline
- *Trace comparison* — side-by-side diff of two runs with color-coded span matching
- *Cost tracking* — 27 models priced (GPT-4.1, Claude 4, Gemini 2.0, etc.)
- *Live streaming* — watch spans appear in real time via SSE
- *Alerting* — anomaly detection for cost spikes, error rates, and latency
- *OTel ingestion* — accepts OTLP HTTP JSON, so any OTel-instrumented app works
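Since ingestion is plain OTLP HTTP JSON, you don't strictly need an AgentLens-specific SDK to send traces. Here's a minimal sketch of what a conforming payload looks like; `/v1/traces` is the standard OTLP/HTTP route, but whether AgentLens mounts it there on port 3000 is my assumption — check the docs for the actual endpoint:

```python
import json

# Minimal OTLP HTTP JSON trace payload (shape per the OpenTelemetry
# protocol spec: resourceSpans -> scopeSpans -> spans).
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-agent"}}
        ]},
        "scopeSpans": [{
            "scope": {"name": "demo"},
            "spans": [{
                "traceId": "5b8efff798038103d269b633813fc60c",
                "spanId": "eee19b7ec3c1b174",
                "name": "tool.search",
                "kind": 1,
                "startTimeUnixNano": "1700000000000000000",
                "endTimeUnixNano": "1700000001000000000",
                "attributes": []
            }]
        }]
    }]
}

body = json.dumps(payload).encode()
# POST to the (assumed) ingestion endpoint, e.g. with urllib:
# req = urllib.request.Request(
#     "http://localhost:3000/v1/traces", data=body,
#     headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

The same shape is what any OTel-instrumented app emits via the OTLP HTTP exporter, so pointing an existing exporter at the AgentLens endpoint should be enough.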
Works with LangChain, CrewAI, AutoGen, LlamaIndex, and Google ADK.
Tech: React 19 + FastAPI + SQLite/PostgreSQL. MIT licensed. 231 tests, 100% coverage.
docker run -p 3000:3000 tranhoangtu/agentlens-observe:0.6.0
pip install agentlens-observe
Demo GIF and screenshots are in the README.
GitHub: https://github.com/tranhoangtu-it/agentlens-observe
Docs: https://agentlens-observe.pages.dev
I'd love feedback on the trace visualization approach and what features matter most for your agent debugging workflow.