But while building it, I kept running into the same pattern: when everything worked, logs were in ClickHouse. When things broke, logs were still inside Kubernetes.
That gap led to adding Kubernetes as a native log source.
This is not meant to replace proper log aggregation. Centralized storage with indexing and retention policies is still the right approach for production.
But there are situations where aggregation doesn't help: the logging pipeline is broken, logs are delayed, or you're debugging locally and don't have a pipeline at all.
In those cases, the logs are already in the pods. The usual fallback is kubectl logs (or stern), often across multiple terminals and namespaces. It works, but correlation becomes manual.
Telescope can now read logs directly from Kubernetes clusters via the Kubernetes API. You can query across multiple namespaces and clusters, filter by labels and fields, apply time ranges, normalize severity across different log formats, and visualize log volume over time.
It works with your existing kubeconfig, fetches logs in parallel (with configurable concurrency), and applies time filters to limit how much data gets pulled from the Kubernetes API.
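To make "query via the Kubernetes API" concrete, here's a rough sketch of the underlying mechanics, not Telescope's actual code: resolve pods with a label selector, then pull each pod's logs with a time bound, fanning out over a worker pool. It assumes the official kubernetes Python client and a working kubeconfig; the namespace, selector, and lookback window are made-up examples.

```python
# Sketch only: illustrates label-selector + since_seconds + parallel fetch,
# not Telescope's implementation. Assumes the `kubernetes` Python client.
from concurrent.futures import ThreadPoolExecutor
from kubernetes import client, config

config.load_kube_config()            # same kubeconfig kubectl uses
v1 = client.CoreV1Api()

NAMESPACE = "default"                # hypothetical namespace
SELECTOR = "app=web"                 # hypothetical label filter
SINCE_SECONDS = 15 * 60              # time filter: only the last 15 minutes

pods = v1.list_namespaced_pod(NAMESPACE, label_selector=SELECTOR).items

def fetch(pod):
    # since_seconds keeps the API from shipping the pod's entire log history
    text = v1.read_namespaced_pod_log(
        pod.metadata.name,
        NAMESPACE,
        since_seconds=SINCE_SECONDS,
        timestamps=True,
    )
    return pod.metadata.name, text

# Configurable concurrency: one request per pod, bounded by the pool size
with ThreadPoolExecutor(max_workers=8) as pool:
    for name, text in pool.map(fetch, pods):
        for line in text.splitlines():
            print(f"{name} {line}")
```

Telescope layers the querying, severity normalization, and volume charts on top of that kind of fan-out; the point is that nothing has to run inside the cluster.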
No agents. No CRDs. No cluster changes.
Current limitation: no streaming / follow mode yet.
Curious if others have run into the same "pipeline gap" problem - when logs aren't in your backend yet, but you still need structured access to them.
GitHub: https://github.com/iamtelescope/telescope
Changelog: https://docs.iamtelescope.net/changelog