frontpage.

OpenTelemetry Is Great, but Who the Hell Is Going to Pay for It?

https://www.adatosystems.com/2025/02/10/who-the-hell-is-going-to-pay-for-this/
43•thunderbong•6h ago

Comments

denysvitali•6h ago
I don't think the comparison is correct. For sure OTel adds some overhead, but if you're currently ingesting raw JSON data, then even with the overhead your total volume is probably going to be reduced, since internally the system talks OTLP - which is often (always?) encoded with protobuf and most of the time sent via gRPC.

It's then obviously your receiving end's job to take the incoming data and store it efficiently - grouping it by resource attributes, for example (since you probably don't want to store the same metadata ten times). But especially thanks to the flexibility of shipping all the surrounding metadata (rather than just the single log line), you can do magic things like routing metrics to different tenants or storage classes, or dropping them.
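That resource-attribute grouping can be sketched in a few lines. This is a toy illustration in plain Python - the dicts and field names stand in for the real OTLP protobuf messages and are not the actual OTLP schema:

```python
def group_by_resource(records):
    """Bucket log records under their shared resource attributes,
    so the metadata is stored once per group instead of once per record."""
    groups = {}
    for rec in records:
        key = frozenset(rec["resource"].items())
        group = groups.setdefault(key, {"resource": rec["resource"], "logs": []})
        group["logs"].append(rec["body"])
    return list(groups.values())

records = [
    {"resource": {"service": "api", "host": "a1"}, "body": "login ok"},
    {"resource": {"service": "api", "host": "a1"}, "body": "login failed"},
    {"resource": {"service": "worker", "host": "b2"}, "body": "job done"},
]

# Two groups: the "api"/"a1" metadata appears once for both of its logs.
grouped = group_by_resource(records)
```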

Having said that, OTEL is both a joy and an immense pain to work with - but I still love the project (and still hate the fact that every release has breaking changes and 4 different version identifiers).

Btw, one of the biggest wins in the otel-collector would be adopting the new Protobuf Opaque API, as it will most likely save lots of CPU cycles (see https://github.com/open-telemetry/opentelemetry-collector/is...) - PRs are always welcome I guess.

maplemuse•5h ago
The part about SNMP made me laugh. I remember integrating SNMP support into an early network security monitoring tool about 25 years ago, and how it seemed clunky at the time. But it's continued to work well, and be supported all these years. It was a standard, but with very broad tool support, so you weren't locked into a particular vendor.
blinded•5h ago
snmp_exporter ftw
rbanffy•5h ago
And, for a lot of things, it's quite sufficient.

I used Munin a lot as well in the 2005-2010 timeframe. I still do, as a backup (for when Prometheus, Grafana, and InfluxDB conspire against me) on my home lab.

Usually the 15-minute collection interval is just fine. One time, though, I had an issue with servers that were just fine, then crashed and rebooted, with no useful metrics collected between the last "I'm fine" and the first "I'm fine again".

At that point we started collecting metrics (for only those servers) every 5 seconds, and we figured out someone had introduced a nasty bug that took a couple of weeks of uptime to run out of memory and crash everything. It was a fun couple of days.

dboreham•5h ago
Uhhh. The point of OTel is that you can host it yourself. And you should, imho, unless you're part of a VC money laundering scheme where they want to puff up NR's or DD's or whoever's portfolio company numbers.
rbanffy•4h ago
You should always think about how much it'll cost for you to roll out and maintain something vs how much it would cost to buy the service from a vendor.

Chances are your volumes are low enough that it will actually be cheaper to run with something like New Relic or Datadog. When the monthly bill starts reaching 10% of what a dedicated person would cost, it's time to plan your move to self-hosted.

mdaniel•1h ago
> it's time to plan your move to self-hosted.

No, it's always time to plan the move to self-hosted, and just occasionally choose someone else to be the "self." Because once a proprietary vendor gets into the stack, evicting them is going to be a project.

I'm aware that this doesn't split cleanly down the "saas only feature" or the evil "rug pull" axes, but I'd much rather say "I legitimately tried to allow us to eject from the walled garden and the world changed" versus "whaddya mean non-Datadog?"

jsight•4h ago
In my experience, the people willing to pay the most to not host it themselves are often the big companies that are long past VC money.

They'll gladly pay someone to do it and have a big team of engineers and planners to support the outsourcing.

Efficiency isn't what bigco inc is about.

xyzzy123•3h ago
BigCos have seen teams come and go, whole departments slaughtered by reorgs. They have seen weird policy changes, political battles, personal beefs and bad managers that trigger waves of attrition.

They know that even if you have the capacity to run something internally today, that is a delicate state of affairs that could easily change tomorrow.

hermanradtke•5h ago
New Relic, Datadog, etc are selling their original offering but now with otel marketing.

I encourage the author to read the honeycomb blog and try to grok what makes otel different. If I had to sum it up in two points:

- wide rows with high cardinality

- sampling
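For illustration, here is a hypothetical "wide event" - one row per request with many columns, including high-cardinality fields you could never afford as metric labels - plus a toy keep-errors/sample-the-rest policy. This is not Honeycomb's or any SDK's actual API:

```python
import random

# Hypothetical wide event: one record per request, arbitrarily many fields,
# including high-cardinality ones (user_id, trace_id) you can slice by later.
wide_event = {
    "service": "checkout",
    "route": "/cart/submit",
    "status": 500,
    "duration_ms": 1843,
    "user_id": "u_8842917",          # high cardinality: unique per user
    "trace_id": "4bf92f3577b34da6",  # high cardinality: unique per request
    "cache_hit": False,
}

def keep(event, rate=0.01, rng=random):
    """Toy sampling policy: always keep errors, sample the rest at `rate`."""
    if event["status"] >= 500:
        return True
    return rng.random() < rate
```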

stego-tech•5h ago
Excellent critique of the state of observability, especially for us IT folks. We’re often the first - and last, until the bills come - line of defense for observability in orgs lacking a dedicated team. SNMP traps get us 99% of the way there with anything operating in a standard way, but OTel/Prometheus/New Relic/etc all want to get “in the action” in a sense, and hoover up as many data points as possible.

Which, sure, if you’re willing to pay for it, I’m happy to let you make your life miserable. But I’m still going to be the Marie Kondo of IT and ask if that specific data point brings you joy. Does having per-second interval data points actually improve response times and diagnostics for your internal tooling, or does it just make you feel big and important while checking off a box somewhere?

Observability is a lot like imaging or patching: a necessary process to be sure, but do you really need a Cadillac Escalade (New Relic/Datadog/etc) to go to the grocery store when a Honda Accord (self-hosted Grafana + OTel) will do the same job more efficiently for less money?

Honestly, I regret not picking the brain of the Observability head at BigCo when I had the chance. What little he showed me (self-hosted Grafana for $90/mo in AWS ECS for the corporate infrastructure of a Fortune 50? With OTel agents consuming 1/3 to 1/2 the resources of New Relic agents? Man, I wish I had jumped down that specific rabbit hole) was amazingly efficient and informative. Observability done right.

rbanffy•4h ago
> But I’m still going to be the Marie Kondo of IT and ask if that specific data point brings you joy.

There seems to be a strong "instrument everything" culture that, I think, misses the point. You want simple metrics (machine and service) for everything, but if your service gets an error every million requests or so, it might be overkill to trace every request. And, for the errors, you usually get a nice stack dump telling you where everything went wrong (and giving you a good idea of what was wrong).

At that point - and only at that point - I'd say it's worth TEMPORARILY adding increased logging and tracing. And yes, it's OK to add those and redeploy TO PRODUCTION.
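A minimal sketch of that temporary-verbosity idea, using Python's stdlib logging; the logger name and the toggle function are made up for illustration:

```python
import logging

logger = logging.getLogger("payments")       # hypothetical service logger
logging.basicConfig(level=logging.WARNING)   # quiet default in production

def set_investigation_mode(enabled: bool):
    """Temporarily crank verbosity while chasing a specific bug."""
    logger.setLevel(logging.DEBUG if enabled else logging.WARNING)

set_investigation_mode(True)    # during the incident: capture everything
set_investigation_mode(False)   # bug found: back to normal
```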

Nextgrid•4h ago
> but do you really need a Cadillac Escalade (New Relic/Datadog/etc) to go to the grocery store

Depends on whether your objective is to go to the grocery store or merely to show off going to the grocery store.

During the ZIRP era there was a financial incentive for everyone to over-engineer things to justify VC funding rounds and appear "cool". Business profitability/cost-efficiency was never a concern (a lot of those businesses were never viable, and their only purpose was to grift VC money and enjoy the "startup founder" lifestyle).

Now ZIRP is over, but the people who started their careers back then are still here, and a lot of them still didn't get the memo.

stego-tech•4h ago
> During the ZIRP era there was a financial incentive for everyone to over-engineer things to justify VC funding rounds and appear "cool".

Yep, and what’s worse is that…

> Now ZIRP is over, but the people who started their career back then are still here and a lot of them still didn't get the memo.

…folks let go from BigTech are filtering into smaller orgs, and the copy-pasters and “startup lyfers” are bringing this attitude with them. I guess I got lucky enough to start my interest in tech before the dotcom crash, my career just before the 2008 crash, and finished my BigTech tenure just after COVID (and before the likely AI crash), and thus am always weighing the costs versus the benefits and trying to be objective.

Nextgrid•3h ago
> folks let go from BigTech are filtering into smaller orgs, and the copy-pasters and “startup lyfers” are bringing this attitude with them

Problem is, not all of them are even doing this intentionally. A lot actually started their career during that clown show, so for them this is normal and they don't know any other way.

stego-tech•1h ago
Yeah, very true, and those of us with more life and career experience (it me) have a societal contract of sorts to teach and lead them out of bad habits or design choices. If we don’t show them a better path forward, they’ll have to suffer to seek it out just like we had to.
mping•4h ago
On paper this looks smart, but when you hit a bug that triggers only under very specific conditions (weird bugs happen more often as you scale), you're gonna wish you had tracing for it.

The ideal setup is to trace as much as you can for some given time frame; if your stack supports compression and tiered storage, it becomes cheaper.

prymitive•4h ago
> There seems to be a strong "instrument everything" culture

Metrics are the easiest way to expose your application’s internal state, and then, as a maintainer of that service, you’re in nirvana. Even if you don’t go that far, you’re likely an engineer writing code, and when it comes time to add some metrics, why wouldn’t you add more rather than less? And once you have all of them, why not add all possible labels? Meanwhile your Prometheus server is in a crash loop because it ran out of RAM, but that’s not a problem visible to you. Unfortunately, there’s a big gap in understanding between a code editor writing instrumentation code and the effect on resource usage at the other end of your observability pipeline.
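The blow-up is multiplicative, which is easy to put numbers on; the cardinalities below are invented for illustration:

```python
from math import prod

# Each label multiplies the number of distinct time series the
# server must hold in memory: total series per metric name is the
# product of the label cardinalities.
label_cardinalities = {
    "method": 7,      # GET, POST, ...
    "status": 12,     # distinct HTTP codes seen
    "endpoint": 40,
    "pod": 300,       # churns on every deploy
}

series_per_metric = prod(label_cardinalities.values())  # 1_008_000
```

Four innocuous-looking labels on one metric already mean over a million series.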

sshine•2h ago
I can only say, I tried to add massive amounts of data points to a fleet of battery systems once; 750 cells per system, 8 metrics per cell, one cell every 20 ms. It became megabits per second, so we only enabled it when engaging the batteries. But the data was worth it, because we could do data modelling on live events in retrospect when we were initially too busy fixing things. Observability is a super power.
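A back-of-envelope check on those numbers (the 8-bytes-per-point payload is my assumption; real encodings add framing overhead):

```python
cells = 750
metrics_per_cell = 8
samples_per_sec = 1000 // 20                  # one sample every 20 ms -> 50 Hz

points_per_sec = cells * metrics_per_cell * samples_per_sec   # 300_000
bytes_per_point = 8                           # assumed packed float64, no framing
mbit_per_sec = points_per_sec * bytes_per_point * 8 / 1e6     # 19.2 Mbit/s
```

So even a bare-bones encoding lands in the tens of megabits per second, consistent with the comment.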
baby_souffle•1h ago
This right here! Don't be afraid to over-instrument. You can always downsample, or even just do basic statistical sampling, before you actually commit your measurements to the time series database.

As annoying as that may sound, it's a hell of a lot harder to go back in time to observe that bizarre intermittent issue...
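A toy bucket-averaging downsampler showing the idea; real TSDBs do this with proper per-rollup aggregation functions:

```python
def downsample(samples, factor):
    """Average fixed-size buckets to cut resolution by `factor`."""
    return [
        sum(samples[i:i + factor]) / len(samples[i:i + factor])
        for i in range(0, len(samples), factor)
    ]

raw = [10, 12, 11, 13, 50, 52, 51, 53]   # 8 raw readings
coarse = downsample(raw, 4)               # -> [11.5, 51.5]
```

The raw data is still there to re-aggregate differently later, which is exactly what you lose if you sample before collection instead of before storage.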

jsight•4h ago
>Observability is a lot like imaging or patching: a necessary process to be sure, but do you really need a Cadillac Escalade (New Relic/Datadog/etc) to go to the grocery store when a Honda Accord (self-hosted Grafana + OTel) will do the same job more efficiently for less money?

The way that I've seen it play out is something like this:

  1. We should self host something like Grafana and otel.
  2. Oh no, the teams don't want to host individual instances of that, we should centralize it!
    (2b - optional, but common, Random team gets saddled with this job)
  3. Oh no, the centralized team is struggling with scaling issues and the service isn't very reliable. We should outsource it for 10x the cost!

This will happen even if they have a really nice set of deployment infrastructure and patterns that could have allowed them to host observability at the team level. It turns out, most teams really don't need the Escalade, they just need some basic graphs and alerts.

Self hosting needs to be more common within organizations.

baby_souffle•1h ago
Another variant of step 2: some individual with a little bit of political capital sees something new and shiny and figures out how to be the first project internally to use, for example, Influx over Prometheus... And now you have a patchwork of dashboards, each broken in their own unique way...
pojzon•4h ago
The difference between OTel and previous standards is that OTel was created by “modern” engineers who don't care about resource consumption, or don't even understand it. Which is funny, because that's what the tool is about.

So yea, cost of storage and network traffic is only going to balloon.

There is room for improvements and I can already see new projects that will most likely gain traction in upcoming years.

growse•4h ago
One of the biggest fallacies I see in this space is people looking at an observability standard like otel and thinking "I must enable all of that".

You really don't have to.

Throw away traces. Throw away logs. Sample those metrics. The standard gives you capabilities, it doesn't force you to use them. Tune based on your risk appetite, constraints, and needs.

My other favourite retort to "look how expensive the observability is" is "have you quantified how expensive not having it is". But I reserve that one for obtuse bean counters :)

cortesoft•4h ago
Setting up a self-hosted Prometheus and Grafana stack is pretty trivial when starting out. I run a Cortex cluster handling metrics for 20,000 servers, and it requires very little maintenance.

Self-hosting metrics at any scale is pretty cost effective.

ramon156•3h ago
All I want is spanned logs in JS. Why do I need OTEL? Why can't pino do this for me?
ris•3h ago
The logging examples given don't appear to be too different from what any structured & annotated logging mechanism would give you. On top of that, it's normally encoded with protobuf and shipped over gRPC, so that's already one up on basic JSON-encoded structured logs.

The main difference I see with otel is the ability to repeatedly aggregate/decimate/discard your data at whatever tier(s) you deem necessary using opentelemetry-collector. The amount of data you end up with is up to you.

sheerun•1h ago
Trapped in his time I see

Claude Code now supports Hooks

https://docs.anthropic.com/en/docs/claude-code/hooks
51•ramoz•52m ago•11 comments

Xfinity using WiFi signals in your house to detect motion

https://www.xfinity.com/support/articles/wifi-motion
230•bearsyankees•5h ago•162 comments

The new skill in AI is not prompting, it's context engineering

https://www.philschmid.de/context-engineering
325•robotswantdata•4h ago•189 comments

I write type-safe generic data structures in C

https://danielchasehooper.com/posts/typechecked-generic-c-data-structures/
216•todsacerdoti•7h ago•83 comments

There are no new ideas in AI only new datasets

https://blog.jxmo.io/p/there-are-no-new-ideas-in-ai-only
291•bilsbie•10h ago•153 comments

The hidden JTAG in a Qualcomm/Snapdragon device’s USB port

https://www.linaro.org/blog/hidden-jtag-qualcomm-snapdragon-usb/
112•denysvitali•6h ago•16 comments

Donkey Kong Country 2 and Open Bus

https://jsgroth.dev/blog/posts/dkc2-open-bus/
187•colejohnson66•9h ago•44 comments

Entropy of a Mixture

https://cgad.ski/blog/entropy-of-a-mixture.html
25•cgadski•3h ago•2 comments

The Original LZEXE (A.K.A. Kosinski) Compressor Source Code Has Been Released

https://clownacy.wordpress.com/2025/05/24/the-original-lzexe-a-k-a-kosinski-compressor-source-code-has-been-released/
49•elvis70•5h ago•3 comments

End of an Era

https://www.erasmatazz.com/personal/self/end-of-an-era.html
67•marcusestes•5h ago•15 comments

Jim Boddie codeveloped the first successful DSP at Bell Labs

https://spectrum.ieee.org/dsp-pioneer-jim-boddie
13•jnord•2h ago•0 comments

GPEmu: A GPU emulator for rapid, low-cost deep learning prototyping [pdf]

https://vldb.org/pvldb/vol18/p1919-wang.pdf
14•matt_d•2h ago•0 comments

Melbourne man discovers extensive model train network underneath house

https://www.sbs.com.au/news/article/i-was-shocked-melbourne-mans-unbelievable-find-after-buying-house/m4sksfer8
39•cfcfcf•1h ago•13 comments

Beneath the canopy: Pioneering satellite reveals rainforests' hidden worlds

https://www.bbc.co.uk/news/resources/idt-d7353b50-0fea-46ba-8495-ae9e25192cfe
4•ZeljkoS•2d ago•0 comments

Show HN: TokenDagger – A tokenizer faster than OpenAI's Tiktoken

https://github.com/M4THYOU/TokenDagger
244•matthewolfe•12h ago•66 comments

Price of rice in Japan falls below ¥4k per 5kg

https://www.japantimes.co.jp/news/2025/06/24/japan/japan-rice-price-falls-below-4000/
64•PaulHoule•4h ago•85 comments

They don't make 'em like that any more: Sony DTC-700 audio DAT player/recorder

https://kevinboone.me/dtc-700.html
71•naves•6h ago•60 comments

Creating fair dice from random objects

https://arstechnica.com/science/2025/05/your-next-gaming-dice-could-be-shaped-like-a-dragon-or-armadillo/
29•epipolar•2d ago•9 comments

People Keep Inventing Prolly Trees

https://www.dolthub.com/blog/2025-06-03-people-keep-inventing-prolly-trees/
15•lifty•2d ago•2 comments

Show HN: New Ensō – first public beta

https://untested.sonnet.io/notes/new-enso-first-public-beta/
212•rpastuszak•13h ago•81 comments

A CarFax for Used PCs; Hewlett Packard wants to give old laptops new life

https://spectrum.ieee.org/carmax-used-pcs
62•rubenbe•8h ago•65 comments

14.ai (YC W24) hiring founding engineers in SF to build a Zendesk alternative

https://14.ai/careers
1•michaelfester•7h ago

Ask HN: What Are You Working On? (June 2025)

355•david927•1d ago•1113 comments

The provenance memory model for C

https://gustedt.wordpress.com/2025/06/30/the-provenance-memory-model-for-c/
198•HexDecOctBin•15h ago•106 comments

Ask HN: What's the 2025 stack for a self-hosted photo library with local AI?

139•jamesxv7•6h ago•70 comments

The Plot of the Phantom, a text adventure that took 40 years to finish

https://scottandrew.com/blog/2025/06/you-can-now-play-plot-of-the-phantom-the-text-adventure-game/
174•SeenNotHeard•3d ago•34 comments

Jacobi Ellipsoid

https://en.wikipedia.org/wiki/Jacobi_ellipsoid
25•perihelions•2d ago•4 comments

New proof dramatically compresses space needed for computation

https://www.scientificamerican.com/article/new-proof-dramatically-compresses-space-needed-for-computation/
177•baruchel•3d ago•92 comments

Public Signal Backups Testing

https://community.signalusers.org/t/public-signal-backups-testing/69984
23•blendergeek•5h ago•2 comments

Show HN: We're two coffee nerds who built an AI app to track beans and recipes

https://beanbook.app
36•rokeyzhang•6h ago•24 comments