I just generate a random UUID in the application and make sure to log it everywhere across the entire stack along with a timestamp.
Any old log aggregator can give me an accurate timeline grouped by request UUID across every backend component all in one dashboard.
It's the very first thing I have the application do when handling a request. It's injected at the log handler level. There's nothing to break and nothing to think about.
So, I have no problem knowing precise cause and effect with regard to all logs for a given isolated request, but I agree that there may be blips that affect multiple requests (outages, etc.). We have synthetic tests for outages though.
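A minimal sketch of the pattern described above, in Python: a fresh UUID is minted as the first step of request handling and injected into every log record by a `logging.Filter` attached at the handler level. The names `RequestIdFilter` and `handle_request` are illustrative, not from any particular framework; a `ContextVar` stands in for whatever request-scoped storage your stack provides.

```python
import logging
import uuid
from contextvars import ContextVar

# Request-scoped storage for the ID; ContextVar is safe across async tasks.
request_id: ContextVar[str] = ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    """Inject the current request ID into every log record (hypothetical name)."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()
        return True  # never drop records, just annotate them

handler = logging.StreamHandler()
handler.addFilter(RequestIdFilter())
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(request_id)s %(levelname)s %(message)s"))
logging.basicConfig(level=logging.INFO, handlers=[handler])

def handle_request():
    # The very first thing per request: mint a random UUID.
    request_id.set(str(uuid.uuid4()))
    logging.info("handling request")  # every line now carries the ID
```

Because the filter sits on the handler, application code never mentions the ID; any log aggregator can then group the resulting lines by the `request_id` field.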
I too am struggling to understand what this tool does beyond grouping all logs by a unique request identifier.
They spend the whole page talking about a scenario that I've only seen happen in production when there were no app devs involved and people are allergic to writing a log format string, let alone a single line of code.
Scout is our OTel-native observability product (data lake, UI, alerts, analytics, MCP, the works). What we call pgX in the blog is an add-on to Scout.
> Before configuring pgX, you need to set up PostgreSQL metrics collection:
Click the link.
> Prerequisites > PostgreSQL instance > Scout account and API credentials > Scout Collector installed and configured (see Quick Start)
Multiple clicks to find out I need a separate account somewhere (wth is scout?). That's gonna be a no from me dawg.
At least when places like Datadog do content marketing they provide ways to monitor the services using tools that don't require paying them money.
This is a feature of an observability product called Scout. It's not a standalone tool.