So far it's pretty good. We're at least one major version behind, but hey everything still works.
I cannot imagine other products support as many data sources (though I'm starting to think they all suck, I just dump what I can in InfluxDB).
I operate a fairly large custom VictoriaMetrics-based observability platform and learned early on to only use Grafana itself, as opposed to other Grafana Labs products. Part of the stack used to use Mimir's frontend as a caching layer, but even that died with Mimir v3.0, now that it can't talk to generic Prometheus APIs anymore (vanilla Prom, VictoriaMetrics, promxy, etc.). I went back to Cortex for caching.
Such a custom stack is obviously not for everyone and takes much more time, knowledge, and effort to deploy than some Helm chart, but overall I'd say it has saved me some headaches. At least when compared to the Google-like deprecation culture Grafana seems to have.
We're using a combination of Zabbix (alerting) and local Grafana/Prometheus/Loki (observability) at this point, but I've been worried about when Grafana will rug-pull for a while now. Hopefully enough people using their cloud offering sates their appetite and they leave the people running locally alone.
I'm out of that game now though so don't have the challenge.
The kicker for me recently was hearing someone say "ally" (for a11y)
Or without numbers,
authC/authN, authZ...
Besides that, if you're feeling masochistic you could use Prometheus' console templates or VictoriaMetrics' built-in dashboards.
Though these are all obviously nowhere near as feature-rich and capable as Grafana, and they can only display metrics for the single Prom/VM node they're running on. Might be enough for some users.
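For anyone curious how bare-bones the console template option is, here's roughly what one looks like: a Go template dropped into Prometheus's consoles/ directory, using the bundled console library (the "head"/"tail" templates and the query function; exact helper names may differ between versions, so check the docs for yours):

```
{{ template "head" . }}
<h1>Instance status</h1>
<table>
{{ range query "up" | sortByLabel "instance" }}
  <tr>
    <td>{{ .Labels.instance }}</td>
    <td>{{ .Value }}</td>
  </tr>
{{ end }}
</table>
{{ template "tail" }}
```

Saved as e.g. consoles/up.html, it gets served at /consoles/up.html on that one Prometheus node. No panels, no variables, no cross-node view, which is exactly the limitation mentioned above.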
Grafana's dashboards themselves (paired with VictoriaMetrics and occasionally ClickHouse) make for one of the most pleasant web apps IMO. Especially when you don't try to push past the constraints of its display model, which are sometimes annoying but understandable.
Prometheus and Grafana have been progressing in their own ways, each trying to build a full-stack solution, and then the OTel thing came along and ruined the party for everyone
Does OTel mean we just need to replace all our collectors (like Logstash for logs, plus all the native metrics collectors and the Pushgateway cruft) and then reconfigure Prometheus and OpenSearch?
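In rough terms, yes: the pitch is one OTel Collector replacing the per-signal agents, with pipelines fanning out to your existing backends. A sketch of what that config might look like, with the caveat that component names are from the collector-contrib distribution as I remember them (the opensearch exporter in particular is an assumption, and Prometheus needs its remote-write receiver enabled), so verify against current docs:

```yaml
receivers:
  otlp:                     # apps push traces/metrics/logs over OTLP
    protocols:
      grpc:
      http:
  filelog:                  # tails files, replacing a Logstash-style shipper
    include: [/var/log/app/*.log]

exporters:
  prometheusremotewrite:    # metrics into Prometheus via remote write
    endpoint: http://prometheus:9090/api/v1/write
  opensearch:
    http:
      endpoint: http://opensearch:9200

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [filelog, otlp]
      exporters: [opensearch]
```

So it's less "reconfigure Prometheus and OpenSearch" and more "put one more ingestion path in front of them", which is exactly the kind of churn the thread is complaining about.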
And we don't really have a simpler alternative in sight... at least in the Java days there was disgust and a reaction, via Struts, Spring, EJB3+, and of course other languages and communities.
Not sure how exactly we got into such an over-engineered monoculture in operations, monitoring, and deployment for 80%+ of the industry (k8s + Grafana/Loki/Tempo + endless supporting tools and flavors), but it is really a sad state.
Then you have endless implementations handling bits and pieces of various parts of the spec, and of course you have the tools to actually ingest and analyze and report on them.
The Pushgateway's documentation itself calls out that there are only very limited circumstances where it makes sense.
I personally only used it in $old_job and only for batch jobs that could not use the node_exporter's textfile collector. I would not use it again and would even advise against it.
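For reference, the textfile collector pattern is just a batch job writing a .prom file into the directory node_exporter watches. A minimal sketch (the metric name and directory here are made up for illustration; in production the directory is whatever you pass to --collector.textfile.directory):

```shell
#!/bin/sh
set -eu

# Directory node_exporter scans for *.prom files. Defaults to a temp dir
# here so the sketch is runnable standalone; in production, point it at
# the path given to --collector.textfile.directory.
TEXTFILE_DIR=${TEXTFILE_DIR:-$(mktemp -d)}

# ... the actual batch work would happen here ...

# Write to a temp file in the same directory, then mv into place: the
# rename is atomic, so node_exporter never scrapes a half-written file.
tmp=$(mktemp "$TEXTFILE_DIR/backup.prom.XXXXXX")
cat > "$tmp" <<EOF
# HELP backup_last_success_timestamp_seconds Unix time of last successful backup.
# TYPE backup_last_success_timestamp_seconds gauge
backup_last_success_timestamp_seconds $(date +%s)
EOF
mv "$tmp" "$TEXTFILE_DIR/backup.prom"
```

Compared to the Pushgateway, the metric disappears with the host instead of lingering forever, and there's no extra service to run, which is most of why the docs steer people this way.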
The author is 100% correct: Monitoring should be the most boring tool in the stack. Its one and only job is to be more reliable than the thing it's monitoring.
The moment your monitoring stack requires a complex dependency like Kafka, or changes its entire agent flow every 18 months, it has failed its primary purpose. It has become the problem.
This sounds less like a technical evolution and more like the classic VC-funded push to get everyone onto a high-margin cloud product, even at the cost of the open-source soul.
This isn't a Grafana problem, this is an industry-wide problem. Resume-driven product design, resume-driven engineering, resume-driven marketing. Do your 2-3 years, pump out something big to inflate your resume, then apply elsewhere to get the pay bump that almost no company is handing out internally. After the departures there is no one left who knows the system, and the next people in want to replace the things they don't understand to pad their resumes for the next job.
Wash, rinse, repeat.
Loyalty simply goes unrewarded in a lot of places in our industry (and at many corporations). And the people who do stay... in many cases they turn into furniture that ends up holding good evolution back. They lose out to the technological magpies who bring shiny things to management because it will "move the needle".
Sadly this is just one facet of the problems we are facing, from how we interview to how we run (or rent) our infrastructure things have gotten rather silly...
The days where you could devote your career to a firm and retire with a pension are long gone
The author of this article wants a boring tech stack that just works, and honestly after everything we’ve been through in the last five years, I kinda want a boring job I can keep until I retire, too
the Elastic stack is so heavy it's out of the question for smaller clusters. Loki's integration with Grafana is nice to have, but a separate, capable dashboard would also be fine
I think this wouldn't need to be an issue as frequently if Prometheus had a more efficient publish/scrape mechanism. IIRC there was once a protobuf metric format that was dropped, and now there is just the text format. While it wouldn't handle billions of unique labels like Mimir, a compact binary metric format could certainly allow for millions at reasonable resolution, instead of wasting all that scale potential on repeated name strings. I should be able to push or expose a bulk blob all at once, with ordered labels or at least raw int keys.
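To make the repetition concrete: in the text exposition format, every series re-sends the full metric name and every label key on every scrape (sample data below is made up):

```
http_requests_total{method="GET",path="/api/orders",status="200"} 1027
http_requests_total{method="GET",path="/api/orders",status="500"} 3
http_requests_total{method="POST",path="/api/orders",status="200"} 417
http_requests_total{method="POST",path="/api/orders",status="500"} 12
```

Only the label values and samples actually vary between lines; a binary format could send the name and label keys once as a dictionary and then just indexed values per series, which is roughly the "ordered labels or raw int keys" idea above.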
That's beside the point that most customers will never need that level of scale. If you're not running Mimir on a dedicated Kubernetes cluster (or at least a dedicated-to-Grafana/observability cluster), then it's probably over-engineered for your use case. Just use Prometheus.