The article makes a fair case for sticking with OTel, but it also feels a bit like forcing a general-purpose tool into a domain where richer semantics might genuinely help. “Just add attributes” sounds neat until you’re debugging a multi-agent system with dynamic tool calls. Maybe hybrid or bridging standards are inevitable?
Curious if others here have actually tried scaling LLM observability in production: where does it hold up, and where does it collapse? Do you also feel the “open standards” narrative sometimes carries a bit of vendor bias with it?
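To make the “just add attributes” concern concrete, here’s a rough sketch (plain Python, no OTel dependency) of what encoding a dynamic tool-call sequence into flat span attributes tends to look like. The key names are hypothetical, loosely inspired by the gen_ai semantic conventions, not taken from any real spec:

```python
# Hedged sketch: OTel span attributes are flat key/value pairs, so a
# variable-length list of tool calls ends up encoded with
# index-suffixed keys. The attribute names below are made up for
# illustration, not from an actual convention.

def tool_calls_to_attributes(tool_calls):
    """Flatten a list of tool-call dicts into flat span attributes."""
    attrs = {}
    for i, call in enumerate(tool_calls):
        prefix = f"gen_ai.tool_call.{i}"
        attrs[f"{prefix}.name"] = call["name"]
        # Arguments have to be serialized to a string, since attribute
        # values are limited to primitives or arrays of primitives.
        attrs[f"{prefix}.arguments"] = str(call["args"])
    return attrs

calls = [
    {"name": "search", "args": {"query": "OTel for LLMs"}},
    {"name": "summarize", "args": {"max_tokens": 256}},
]
print(tool_calls_to_attributes(calls))
```

Once an agent recurses or spawns sub-agents, you’re hand-rolling a nesting scheme on top of these keys, which is exactly where richer native semantics would pay off.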
CuriouslyC•56m ago
perfmode•41m ago
mindcrime•20m ago
https://github.com/Arize-ai/phoenix