PeterCorless•1h ago
> Agents are cardinality-hungry. They want the high-cardinality data you'd normally drop: individual trace IDs, per-request attributes, full tag sets. They are very patient. They will sift through it.
The agents themselves are unlikely to run the high-cardinality queries directly, or they will keel over. They have limited memory buffers, they will take many seconds to return results, and they are likely to be limited in QPS.
From the blog:
> Apache Iceberg, with data stored as Parquet on S3, and most of the system implemented in Go
You have just ensured that queries will have a p99 >1 second. This is kind of antithetical to having an agent be fast.
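To make that latency floor concrete: an Iceberg scan has to chain several S3 round trips (manifest list, then manifests, then the Parquet data files themselves), and each GET costs tens to hundreds of milliseconds to first byte. A minimal Go sketch to measure the floor yourself, assuming the AWS SDK v2 and a placeholder bucket/key (substitute your own Parquet file):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg)

	// Hypothetical object; point this at one of your own data files.
	bucket := "my-iceberg-bucket"
	key := "warehouse/db/table/data/part-00000.parquet"

	start := time.Now()
	out, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		panic(err)
	}
	defer out.Body.Close()
	ttfb := time.Since(start) // roughly time-to-first-byte: headers received

	io.Copy(io.Discard, out.Body) // drain the object to time the full read
	total := time.Since(start)

	fmt.Printf("TTFB: %v, full read: %v\n", ttfb, total)
	// An Iceberg query chains several such reads before it can return a
	// row, so a sub-second p99 is hard to hit on a cold path.
}
```

Multiply that single-object number by the number of round trips a real scan needs and the p99 > 1 second claim follows.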
You couldn't run any sort of real-time service where hundreds of thousands to millions of events occur per second and you need to react in milliseconds: at 500,000 events per second, a one-second query means half a million events arrive before you can respond.
The terms "p99" and "QPS" do not occur anywhere in the article, which leaves the question of scalability to the user's imagination.
I applaud the direction. I am looking for objective evidence.