Also, I've ended up being responsible for infra and observability at a few startups now, and you are completely correct about the amount of waste and unnecessary cost. Looking forward to trying out Tero.
And yes, data waste in this space is absurdly bad. I don't think people realize how bad it actually is. I estimate ~40% of the data (being conservative) is waste. But now we know - and knowing is half the battle :)
First of all, thanks for your (and the team’s) work on Vector. It is one of my favorite pieces of software, and I rave about it pretty much daily.
The new endeavor sounds very exciting, and I can definitely relate to the problem. Are there plans to allow Tero to be used in on-premises and self-hosted environments?
Thank you and good luck!
And yes, Tero is fundamentally a control plane that hooks into your data plane (whatever that is for you: OTel Collector, Datadog Agent, Vector, etc). It can run on-prem, use your own approved AI, completely within your network, and completely private.
However, as always, the problem is more political than technical, and those are the hardest problems to solve; another service with more cost won't solve it, IMO. That said, there is plenty of money to be made in attempting to solve it, so go get that bag. :)
At the end of the day, it comes back to the DevOps mentality, and that's never caught on at most companies. Devs don't care, the project manager wants us to stop blocking feature velocity, and we're not properly staffed since we're a "massive wasteful cost center".
Tero doesn't just tell you how much is waste. It breaks down exactly what's wrong, attributes it to each service, and makes it possible for teams to finally own their data quality (and cost).
One thing I'm hoping catches on: now that we can put a number on waste, it can become an SLO, just like any other metric teams are responsible for. Data quality becomes something that heals itself.
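To make that concrete, a minimal sketch of what a waste SLO could look like (the numbers, service names, and 15% target are all made up for illustration):

    # Illustrative sketch: treating per-service data waste as an SLO.
    # Numbers and service names are hypothetical.
    WASTE_SLO = 0.15  # at most 15% of ingested bytes may be unused

    ingested = {"checkout": 120_000, "search": 80_000}   # bytes ingested
    unused   = {"checkout":  55_000, "search":  9_000}   # bytes never queried

    for service, total in ingested.items():
        waste_ratio = unused[service] / total
        status = "OK" if waste_ratio <= WASTE_SLO else "VIOLATION"
        print(f"{service}: {waste_ratio:.0%} waste ({status})")

Once it's a number per service, it can page, trend, and be owned like any other SLO.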
It replaced both Filebeat and Logstash for us with a single binary that actually has sane resource usage (no more JVM nightmares). VRL turned out to be way more powerful than we could imagine - we do all our log parsing, metadata enrichment, and routing to different ClickHouse tables in one place. The agent/aggregator topology with disk buffering is pretty dope.
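Roughly the kind of transform we express in VRL, sketched here in Python just to show the shape (field names and table names are made up, not our real config):

    # Rough sketch of the parse -> enrich -> route flow we do in VRL.
    import json

    def transform(raw_line: str, pod_labels: dict) -> tuple[str, dict]:
        event = json.loads(raw_line)                      # parse structured log
        event["env"] = pod_labels.get("env", "unknown")   # metadata enrichment
        event["team"] = pod_labels.get("team", "unowned")
        # route by log type to different ClickHouse tables
        table = "logs_access" if event.get("kind") == "access" else "logs_app"
        return table, event

    table, event = transform('{"kind": "access", "path": "/health"}', {"env": "prod"})
    print(table, event)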
Genuinely one of my favorite pieces of infra software. Good luck with Tero.
All major vendors have a nice dashboard and sometimes alerts to understand usage (broken down by signal type or tags) ... but there's clearly a need for more advanced analysis which Tero seems to be going after.
Speaking of the elephant in the room in observability: why does storing data with a vendor cost so much in the first place? With most new observability startups choosing to store data in columnar formats on cheap object storage, I think this is also getting challenged in 2026. The combination of cheap storage with meaningful data could breathe some new life into the space.
Excited to see what Tero builds.
Well, yeah, that's sort of the magic of the regular expression <-> NFA equivalence theorem. Any regex can be converted to a state machine. And since you can combine regexes (and NFAs!) procedurally, this is not a surprising result.
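A toy illustration of the "combine procedurally" part, using plain Python re (the patterns are made up):

    # Toy illustration: regexes compose procedurally, e.g. by alternation.
    # Anything matched by either pattern is matched by the combination,
    # mirroring the union construction on the underlying NFAs.
    import re

    patterns = [r"ERROR \d+", r"timeout after \d+ms"]
    combined = re.compile("|".join(f"(?:{p})" for p in patterns))

    print(bool(combined.search("ERROR 42")))             # True
    print(bool(combined.search("timeout after 300ms")))  # True
    print(bool(combined.search("all good")))             # False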
> I ran it against the first service: ~40% waste. Another: ~60%. Another: ~30%. On average, ~40% waste.
I'm surprised it's only 40%. Observability seems to be treated like a fire suppression system: all-important in a crisis, but it looks like waste during normal operations.
> The AI can't find the signal because there's too much garbage in the way.
There are surprisingly simple techniques to filter out much of the garbage: compare logs from known good to known bad, and look for the stuff that's strongly associated with bad. The precise techniques seem Bayesian in nature: the more evidence (logs) you get, the stronger the association appears.
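A minimal sketch of what I mean (the log lines are made up; a real pipeline would score log templates rather than raw words):

    # Score tokens by how strongly they're associated with "bad" runs vs
    # "good" runs, using a smoothed log-likelihood ratio.
    import math
    from collections import Counter

    good_logs = ["request ok 200", "request ok 200", "cache hit"]
    bad_logs  = ["request failed 503", "upstream timeout", "request failed 503"]

    good = Counter(tok for line in good_logs for tok in line.split())
    bad  = Counter(tok for line in bad_logs for tok in line.split())

    def score(tok: str) -> float:
        # +1 smoothing so unseen tokens don't blow up the ratio
        p_bad  = (bad[tok] + 1) / (sum(bad.values()) + 1)
        p_good = (good[tok] + 1) / (sum(good.values()) + 1)
        return math.log(p_bad / p_good)

    tokens = set(good) | set(bad)
    for tok in sorted(tokens, key=score, reverse=True)[:3]:
        print(tok, round(score(tok), 2))

More bad logs containing "failed" or "503" push those scores further up, which is the Bayesian flavor I mean.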
More sophisticated techniques will do dimensional analysis -- are these failed requests associated with a specific pod, availability zone, locale, software version, query string, customer, etc.? But you'd have to do so much pre-analysis, prompting, and tool calling that the LLMs that comprise today's AI won't provide any actual value.
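The dimensional version, again just a sketch with made-up requests and dimension names:

    # Failure rate broken down per dimension value, to spot which pod / AZ /
    # version the failures cluster on.
    from collections import defaultdict

    requests = [
        {"pod": "a", "az": "us-east-1a", "version": "1.4", "failed": True},
        {"pod": "a", "az": "us-east-1a", "version": "1.4", "failed": True},
        {"pod": "b", "az": "us-east-1b", "version": "1.3", "failed": False},
        {"pod": "c", "az": "us-east-1a", "version": "1.3", "failed": False},
    ]

    for dim in ("pod", "az", "version"):
        totals, fails = defaultdict(int), defaultdict(int)
        for r in requests:
            totals[r[dim]] += 1
            fails[r[dim]] += r["failed"]
        rates = {v: fails[v] / totals[v] for v in totals}
        print(dim, rates)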
Somebody didn’t math right when calculating whether moving off hostedgraphite and StatsD was going to save us money or boil us alive. We moved from an inordinate number of individual stats with interpolated names to much simpler names but with tag cardinality, and then the cardinality police showed up and kept harping on me to fix it. We were the user- and customer-facing portion of a SaaS company, and I told them to fuck off when we were 1/7 of the overall stats traffic. I’d already reduced the cardinality by 400x, we were months past the transition date, and I just wanted to work on anything that wasn’t stats for a while. Like features for the other devs or for our customers.
Very frustrating process. I suspect there’s a Missing Paper out there somewhere on how to compress stat cardinality. I’ve done a bit of work in that area, but my efforts are in the 20% range and we need an order of magnitude. My changes were more about reducing the storage for the tags, and they reduced string arithmetic a bit in the process.
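For what it's worth, the kind of change I mean looked roughly like dictionary-encoding the tag strings (a toy sketch, not the actual code from that system):

    # Toy sketch: intern tag strings so each unique value is stored once and
    # series reference small integer ids instead of repeating the string.
    class TagInterner:
        def __init__(self):
            self.ids = {}       # tag string -> id
            self.values = []    # id -> tag string

        def intern(self, tag: str) -> int:
            if tag not in self.ids:
                self.ids[tag] = len(self.values)
                self.values.append(tag)
            return self.ids[tag]

    interner = TagInterner()
    series = [interner.intern(t) for t in ["region=us-east", "region=us-east", "region=eu-west"]]
    print(series)           # [0, 0, 1] -- repeated tags share one id
    print(interner.values)  # each string stored only once

That gets you storage and string-arithmetic wins, but it doesn't actually reduce cardinality, which is why it tops out well short of an order of magnitude.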
> I'm answering the question your observability vendor won't
There was no question answered here at all. It's basically a teaser designed to attract attention and stir debate. Respectfully, it's marketing, not problem solving. At least, not yet.
They determine what events/fields are not used and then add filters to your observability provider so you don't pay to ingest them.
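I don't know their exact mechanism, but conceptually it's something like this (field names and the filter shape are hypothetical, not Tero's actual output):

    # Conceptual sketch only -- not Tero's implementation.
    # Compare fields that get ingested against fields ever referenced by
    # queries/dashboards/alerts, and propose dropping the difference.
    ingested_fields   = {"trace_id", "user_agent", "debug_blob", "status", "duration_ms"}
    referenced_fields = {"trace_id", "status", "duration_ms"}

    unused = ingested_fields - referenced_fields
    drop_filter = {"drop_fields": sorted(unused)}
    print(drop_filter)  # {'drop_fields': ['debug_blob', 'user_agent']}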