frontpage.

Meta: Shut Down Your Invasive AI Discover Feed. Now

https://www.mozillafoundation.org/en/campaigns/meta-shut-down-your-invasive-ai-discover-feed-now/
94•speckx•49m ago•38 comments

Decreasing GitLab repo backup times from 48 hours to 41 minutes

https://about.gitlab.com/blog/2025/06/05/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/
40•immortaljoe•39m ago•5 comments

An Interactive Guide to Rate Limiting

https://blog.sagyamthapa.com.np/interactive-guide-to-rate-limiting
45•sagyam•1h ago•15 comments

Odyc.js – A tiny JavaScript library for narrative games

https://odyc.dev
87•achtaitaipai•2h ago•13 comments

Sandia turns on brain-like storage-free supercomputer – Blocks and Files

https://blocksandfiles.com/2025/06/06/sandia-turns-on-brain-like-storage-free-supercomputer/
18•rbanffy•57m ago•1 comment

A masochist's guide to web development

https://sebastiano.tronto.net/blog/2025-06-06-webdev/
63•sebtron•2h ago•5 comments

Why Bell Labs Worked

https://links.fabiomanganiello.com/share/683ee70d0409e6.66273547
11•speckx•55m ago•4 comments

Free Gaussian Primitives at Anytime Anywhere for Dynamic Scene Reconstruction

https://zju3dv.github.io/freetimegs/
12•trueduke•1h ago•0 comments

Curate Your Shell History

https://esham.io/2025/05/shell-history
24•todsacerdoti•2h ago•15 comments

Too Many Open Files

https://mattrighetti.com/2025/06/04/too-many-files-open
11•furkansahin•1h ago•3 comments

VPN providers in France ordered to block pirate sports IPTV

https://torrentfreak.com/major-vpn-providers-ordered-to-block-pirate-sports-streaming-sites-250516/
26•gasull•51m ago•7 comments

Small Programs and Languages

https://ratfactor.com/cards/pl-small
58•todsacerdoti•2h ago•17 comments

Weaponizing Dependabot: Pwn Request at its finest

https://boostsecurity.io/blog/weaponizing-dependabot-pwn-request-at-its-finest
48•chha•5h ago•28 comments

4-7-8 Breathing

https://www.breathbelly.com/exercises/4-7-8-breathing
4•cheekyturtles•46m ago•0 comments

Deepnote (YC S19) is hiring engineers to build an AI-powered data notebook

https://deepnote.com/join-us
1•Equiet•4h ago

Self-hosting your own media considered harmful according to YouTube

https://www.jeffgeerling.com/blog/2025/self-hosting-your-own-media-considered-harmful
1285•DavideNL•11h ago•536 comments

How to (actually) send DTMF on Android without being the default call app

https://edm115.dev/blog/2025/01/22/how-to-send-dtmf-on-android
18•EDM115•4h ago•2 comments

Swift and Cute 2D Game Framework: Setting Up a Project with CMake

https://layer22.com/swift-and-cute-framework-setting-up-a-project-with-cmake
58•pusewicz•5h ago•43 comments

Top researchers leave Intel to build startup with 'the biggest, baddest CPU'

https://www.oregonlive.com/silicon-forest/2025/06/top-researchers-leave-intel-to-build-startup-with-the-biggest-baddest-cpu.html
42•dangle1•2h ago•23 comments

Ask HN: Any good tools for viewing congressional bills?

12•tlhunter•27m ago•2 comments

ThornWalli/web-workbench: Old operating system as homepage

https://github.com/ThornWalli/web-workbench
16•rbanffy•4h ago•3 comments

Jepsen: TigerBeetle 0.16.11

https://jepsen.io/analyses/tigerbeetle-0.16.11
164•aphyr•5h ago•44 comments

The impossible predicament of the death newts

https://crookedtimber.org/2025/06/05/occasional-paper-the-impossible-predicament-of-the-death-newts/
534•bdr•1d ago•178 comments

OpenAI is retaining all ChatGPT logs "indefinitely." Here's who's affected

https://arstechnica.com/tech-policy/2025/06/openai-confronts-user-panic-over-court-ordered-retention-of-chatgpt-logs/
8•Bender•1h ago•3 comments

The Coleco Adam Computer

https://dfarq.homeip.net/coleco-adam-computer/
17•rbanffy•5h ago•5 comments

Show HN: Air Lab – A portable and open air quality measuring device

https://networkedartifacts.com/airlab/simulator
436•256dpi•1d ago•177 comments

Apple warns Australia against joining EU in mandating iPhone app sideloading

https://www.neowin.net/news/apple-warns-australia-against-joining-eu-in-mandating-iphone-app-sideloading/
25•bundie•58m ago•4 comments

Tokasaurus: An LLM inference engine for high-throughput workloads

https://scalingintelligence.stanford.edu/blogs/tokasaurus/
197•rsehrlich•18h ago•23 comments

How we’re responding to The NYT’s data demands in order to protect user privacy

https://openai.com/index/response-to-nyt-data-demands/
245•BUFU•15h ago•235 comments

Test Postgres in Python Like SQLite

https://github.com/wey-gu/py-pglite
134•wey-gu•15h ago•44 comments

Show HN: ClickStack – Open-source Datadog alternative by ClickHouse and HyperDX

https://github.com/hyperdxio/hyperdx
224•mikeshi42•22h ago
Hey HN! Mike & Warren here from HyperDX (now part of ClickHouse)! We’ve been building ClickStack, an open source observability stack that helps you collect, centralize, search/viz/alert on your telemetry (logs, metrics, traces) in just a few minutes - all powered by ClickHouse (Apache2) for storage, HyperDX (MIT) for visualization and OpenTelemetry (Apache2) for ingestion.

You can check out the quick start for spinning things up in the repo here: https://github.com/hyperdxio/hyperdx

ClickStack makes it really easy to instrument your application so you can go from bug reports of “my checkout didn’t go through” to a session replay of the user, backend API calls, to DB queries and infrastructure metrics related to that specific request in a single view.

For those that might be migrating from Very Expensive Observability Vendor (TM) to something open source, more performant, and that doesn’t require aggressively culling data through retention limits and sampling rates - ClickStack gives a batteries-included way of starting that migration journey.

For those that aren’t familiar with ClickHouse, it’s a high performance database that has already been used by companies such as Anthropic, Cloudflare, and DoorDash to power their core observability at scale due to its flexibility, ease of use, and cost effectiveness. However, this has historically required teams to dedicate engineers to building a custom observability stack: it’s difficult not only to get telemetry data into ClickHouse easily, but also to work without a native UI experience.

That’s why we’re building ClickStack - we wanted to bundle an easy way to get started ingesting your telemetry data, whether it’s logs & traces from Node.js or Ruby, or metrics from Kubernetes or your bare metal infrastructure. Just as important, we wanted a visualization experience that lets users quickly search using a familiar Lucene-like search syntax (similar to what you’d use in Google!). We recognise, though, that a SQL mode is needed for the most complex of queries. We've also added high cardinality outlier analysis by charting the delta between outlier and inlier events - which we've found really helpful in narrowing down causes of regressions/anomalies in our traces - as well as log patterns to condense down clusters of similar logs.
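
As a toy sketch of what a Lucene-like search layer does under the hood (this is not ClickStack's actual parser - the grammar and the `Body` column are invented for illustration, and a real translator must parameterize values rather than splice strings):

```python
import re

def lucene_to_where(query: str) -> str:
    """Translate a tiny lucene-like query into a SQL WHERE clause.

    Supports `key:value` pairs (exact match) and bare words
    (substring match against a hypothetical `Body` column).
    """
    clauses = []
    for token in query.split():
        m = re.fullmatch(r"(\w+):(\S+)", token)
        if m:
            key, value = m.groups()
            clauses.append(f"{key} = '{value}'")
        else:
            clauses.append(f"Body ILIKE '%{token}%'")
    return " AND ".join(clauses) or "1"

print(lucene_to_where("level:error checkout"))
# level = 'error' AND Body ILIKE '%checkout%'
```

The appeal of this style of search is that an engineer can type `level:error checkout` mid-incident without thinking about SQL, while the SQL mode stays available for the hard queries.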

We’re really excited about the roadmap ahead in terms of improving ClickStack as a product and the ClickHouse core database to improve observability. Would love to hear everyone’s feedback and what they think!

Spinning up a container is pretty simple:

`docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one`

In-browser live demo (no sign ups or anything silly, it runs fully in your browser!): https://play.hyperdx.io/
Landing page: https://clickhouse.com/o11y
GitHub repo: https://github.com/hyperdxio/hyperdx
Discord community: https://hyperdx.io/discord
Docs: https://clickhouse.com/docs/use-cases/observability/clicksta...
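
Once the container is up, you can push a test log record to the OTLP/HTTP endpoint (port 4318 in the docker command above) without installing any SDK. A minimal sketch using only the Python standard library - the payload follows the OTLP JSON encoding from the OpenTelemetry spec; the service name and message are placeholders:

```python
import json
import time
import urllib.request

def build_otlp_log_payload(message: str, service: str) -> dict:
    """Build a minimal OTLP/JSON log export request body."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}},
            ]},
            "scopeLogs": [{
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": "INFO",
                    "body": {"stringValue": message},
                }],
            }],
        }],
    }

payload = build_otlp_log_payload("hello from a smoke test", "demo-service")
req = urllib.request.Request(
    "http://localhost:4318/v1/logs",  # OTLP/HTTP logs endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with the container running
```

In practice you'd use an OTel SDK or collector rather than hand-rolled payloads, but this is a quick way to confirm the ingest endpoint is alive.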

Comments

readdit•21h ago
I use HyperDX in production and like it a lot. So kudos to the team for building it and merging with ClickHouse. I found a lot of monetary value switching over to HyperDX, considering it's significantly more cost efficient for our needs.

Should we be starting to prepare for the original HyperDX product to be deprecated and potentially move over to ClickStack?

mikeshi42•19h ago
First off, always really excited to hear from our production users - glad to hear you're getting good value out of the platform!

HyperDX isn't being deprecated, you can probably see on the marketing page it's still really prominently featured as an integral part of the stack - so nothing changing there.

We do of course want to get users onto HyperDX v2 and the overall ClickStack pattern. This doesn't mean HyperDX is going away by any means - just that HyperDX is focused a lot more on the end-user experience, and we get to leverage the flexibility, learnings and performance of a more exposed ClickHouse-powered core which is the intent of ClickStack. On the engineering side, we're working on making sure it's a smooth path for both open source and cloud.

side note: weird I thought I replied to this one already but I've been dealing with spotty wifi today :)

HatchedLake721•16h ago
Still confused where HyperDX ends and where ClickStack starts.

Is HyperDX === ClickStack?

Is ClickStack = HyperDX + something closed source?

Is ClickStack just a cloud version of HyperDX?

Is it same thing, HyperDX, rebranded as ClickStack?

mikeshi42•13h ago
This is good feedback to make things more clear :) HyperDX is part of ClickStack, so ClickStack = { HyperDX, ClickHouse, OTel }. This is the stack we recommend that will deploy in seconds and _just work_, and can scale up to PB+ and beyond as well with some additional effort (more than a few seconds unfortunately, but one day...)

HyperDX v2, the version that is now stable and shipped in ClickStack, focuses more on the querying layer. It lets users have more customization around ClickHouse (virtually any schema, any deployment).

Optionally, users can leverage other ways of getting data into ClickHouse like Vector, S3, etc. but still use HyperDX v2 on top. Previously in HyperDX v1 you _had_ to use OTel and our ingestion pipeline and our schemas. This is no longer true in v2.

Let me know if this explanation helps

3dteemu•8h ago
I'm also a bit confused. I'm using HyperDX cloud and sending telemetry directly from NextJS. What's the benefit of using ClickStack compared to HyperDX cloud?
mikeshi42•10m ago
ClickStack is currently just open source - so there's no cloud or a fully hosted offering yet! (Of course you can always pair ClickStack with ClickHouse Cloud to have your ClickHouse hosted for you).

But in this case there's probably no reason for you :) These improvements will come to our cloud offering of course as we work on rolling out upgrades from HyperDX v1 to v2 in cloud.

wvh•7h ago
What's your opinion on OTel when trying to keep things small and performant? I've got some experience working with OTel the last few years, and I'm a bit afraid of the expanding scope and complexity compared to "simpler", more targeted solutions, like for instance Vector.

I'm just asking because you mention OTel and "other ways" in your post, and you must have a good overview over the options and where the market is headed.

mikeshi42•12m ago
It's actually not clear to me that Vector is any simpler than OTel (VRL is way more complicated than OTTL, for instance). You can also use the OTel collector builder (ocb) to build a slimmed binary.

My take is that OTel is overall the best investment, it's widely supported across the board by many companies and other vendors. It's also constantly being improved with interesting ideas like otel-arrow which will make it even more performant (and columnar friendly!)

We'll also continue to invest in the OTel ecosystem ourselves, making it easier and easier to get started :)

That being said, I'm not saying that OTel collector is always the right choice, we want to meet users where they are. Some users have data that gets piped into S3 files and we ingest off of a S3 bucket just due to how they've collected data, some use Vector due to its flexibility with VRL, focus on logs, or specific integrations it provides out of the box. So the answer is always - it depends :) but I do like OTel and think the future is bright.

codegeek•21h ago
How are you different than Signoz, another YC company that also does Observability using clickhouse ?
oatsandsugar•20h ago
"You" here is ClickHouse
codegeek•34m ago
Yes but that is because they got acquired by Clickhouse. But my question still remains.
mikeshi42•17h ago
Echoing the comment below, I guess one obvious thing is that we are a team at ClickHouse and an official first-party product on top. That translates into:

- We're flexible on top of any ClickHouse instance; you can use virtually any schema in ClickHouse and things will still work. Custom schemas are pretty important for either tuned high performance or once you're at a scale like Anthropic. This also makes it incredibly easy to get started (especially if you already have data in ClickHouse).
- The above also means you don't need to buy into OTel. I love OTel, but some companies choose to use Vector, Cribl, S3, a custom writing script, etc. for good reasons. All of that is supported natively due to the various ClickHouse integrations, and naturally means you can use ClickStack/HyperDX in that scenario as well.
- We also have some cool tools around wrangling telemetry at scale, from Event Deltas (high cardinality correlation between slow spans and normal spans to root cause issues) to Event Patterns (clustering similar logs or spans together automatically with ML) - all of these help users dive into their data in easier ways than just searching & charting.
- We also have session replay capability - to truly unify everything from click to infra metrics.
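
The Event Deltas idea can be sketched roughly like this (illustrative only, not the actual implementation): split spans into outliers and inliers by duration, then rank attribute values by how much more often they appear among the outliers.

```python
from collections import Counter

def event_deltas(spans, threshold_ms):
    """Rank (attribute, value) pairs by outlier frequency minus inlier frequency."""
    outliers = [s for s in spans if s["duration_ms"] > threshold_ms]
    inliers = [s for s in spans if s["duration_ms"] <= threshold_ms]

    def freq(group):
        counts = Counter((k, v) for s in group for k, v in s["attrs"].items())
        total = max(len(group), 1)
        return {kv: n / total for kv, n in counts.items()}

    out_f, in_f = freq(outliers), freq(inliers)
    deltas = {kv: out_f.get(kv, 0) - in_f.get(kv, 0) for kv in out_f}
    return sorted(deltas.items(), key=lambda kv: -kv[1])

spans = [
    {"duration_ms": 900, "attrs": {"region": "us-east", "version": "v2"}},
    {"duration_ms": 850, "attrs": {"region": "us-west", "version": "v2"}},
    {"duration_ms": 30, "attrs": {"region": "us-east", "version": "v1"}},
    {"duration_ms": 45, "attrs": {"region": "us-west", "version": "v1"}},
]
print(event_deltas(spans, threshold_ms=500)[0])
# (('version', 'v2'), 1.0) - v2 appears in all slow spans and no fast ones
```

The attribute with the biggest delta (here `version=v2`) is the first thing worth investigating as the cause of the slow spans.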

We're built to work at the 100PB+ scale we run internally here for monitoring ClickHouse Cloud, but flexible enough to pinpoint specific user issues that get brought up once in a support case, in an end-to-end manner.

There's probably a lot more I'm missing. Ultimately from a product philosophy standpoint, we aren't big believers in the "3 pillars" concept, which tends to manifest as 3 silos/tabs for "logs", "metrics", "traces" (this isn't just Signoz - but across the industry). I'm a big believer that we're building tools to unify and centralize signals/clues in one place and giving the right datapoint at the right time to the engineer. During an incident I just think about what's the next clue I can get to root cause an issue, not if I'm in the logging product or the tracing product.

ksec•21h ago
It would have been even better if the link was pointing to https://github.com/hyperdxio/hyperdx, the actual source code.

Because right now, without the message here on HN, I wouldn't know what "open source observability stack" meant: the webpage does not explain what HyperDX is, nor does it provide a link to it or its code. I was expecting the whole "Open Source Datadog" thing to be a ClickStack repo inside the ClickHouse GitHub, which is not found anywhere.

But other than that, congrats! I have long wondered why no one has built anything on top of ClickHouse to compete with Datadog / New Relic.

ClickHouse opened up an ocean of open source "scalable" web analytics that wasn't previously available or possible. I am hoping we see the same change come to observability platforms as well.

mikeshi42•20h ago
Hey that's a good point on the link! Not something I can change now unfortunately, I was hoping having it near the top of the text post would help too for those that wanted to dig in more :)

That being said - as you've mentioned so many different "store tons of data" apps have been enabled from ClickHouse. Observability is at a point where it's in the same category of: ClickHouse can store a ton of data, OTel can help you collect/process it, and now we just need that analytics user experience layer to present it to the engineers that need an intuitive way to dive in to it all.

sirfz•20h ago
SigNoz is a dd/nr alternative built on clickhouse that I know of
cbhl•18h ago
Looks like it is pointing there now; old link was https://clickhouse.com/use-cases/observability for posterity
ankit01-oss•13h ago
check out SigNoz: https://github.com/SigNoz/signoz

We started building SigNoz as an open-source alternative to Datadog/New Relic four years back, OpenTelemetry-native from day 1. We have shipped some good features on top of OpenTelemetry, and because of OTel's semantic conventions & our query builder, you can correlate any telemetry across signals.

hosh•20h ago
I liked Otel for traces and maybe logging -- but I think the Otel metrics is over-engineered.

Does ClickStack have a way to ingest statsd data, preferably with Datadog extensions (which add tagging)?

Does ClickStack offer correlations across traces, logging, and metrics via unified service tagging? Does the UI offer the ability to link to related traces, logging, and metrics?

Why does the Elixir sdk use the hyperdx library instead of the otel library?

Are Notebooks in the roadmap?

phillipcarter•20h ago
> but I think the Otel metrics is over-engineered.

What about OTel metrics is difficult?

You can set up receivers for other metrics sources like statsd or even the DD agent, so there's no need to immediately replace your metrics stack.

carefulfungi•19h ago
My foray into otel with aws lambda was not a success (about 6 months ago). Many of my issues were with the prom remote writer that I had to use. The extension was not reliable. Queue errors were common in the remote writer. Interop with Prometheus labels was bad. And the various config around delta and non-delta metrics was a bit of a mess. The stack I was using at least didn’t support exponential histograms. Got it to work mostly after days of fiddling but never reliably. Ripped it out and was happier. Maybe a pure OTEL stack would have been a much better experience than needing the prom remote writer - which I’d like to try in the future.

I’d certainly appreciate hearing success stories of OTEL + serverless.

cyberax•15h ago
One critical problem for me: no support for raw metrics.

Sometimes, you just want to export ALL of your metrics to the server and let it deal with histograms, aggregation, etc.

Another annoyance is the API, you can't just put "metrics.AddMeasurement('my_metric', 11)", you have to create a `Meter` (which also requires a library name), and then use it.

mikeshi42•19h ago
Great questions!

OTel Metrics: I get it, it's specified as almost a superset of everyone's favorite metric standards with config for push/pull, monotonic vs delta, exponential/"native" histograms, etc. I have my preferences as well which would be a subset of the standard but I get why a unifying standard needed to be flexible.

Statsd: The great thing about the OTel collector is that it allows ingesting a variety of different data formats, so you can take in statsd and output OTel or write directly to ClickHouse: https://github.com/open-telemetry/opentelemetry-collector-co...
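
For reference, the statsd wire format (with the Datadog tagging extension) is simple enough to show inline. A hypothetical single-line parser - actual ingestion would go through the collector's statsd receiver, not hand-rolled code like this:

```python
def parse_dogstatsd(line: str) -> dict:
    """Parse one DogStatsD line, e.g. 'page.views:1|c|#env:prod'.

    Shape: name:value|type, then optional |@sample_rate and |#tag:val,... parts.
    """
    name_value, *parts = line.split("|")
    name, value = name_value.split(":", 1)
    metric = {"name": name, "value": float(value), "type": parts[0], "tags": {}}
    for part in parts[1:]:
        if part.startswith("#"):  # Datadog tagging extension
            for tag in part[1:].split(","):
                k, _, v = tag.partition(":")
                metric["tags"][k] = v
        elif part.startswith("@"):  # sample rate
            metric["sample_rate"] = float(part[1:])
    return metric

print(parse_dogstatsd("checkout.latency:320|ms|@0.5|#env:prod,service:web"))
```

The tags are what let a backend correlate these metrics with logs and traces from the same service, which is why the Datadog extension matters for the unified-tagging question above.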

We correlate across trace/span id as well as resource attributes. The correlation across logs/traces with span/trace id is a pretty well worn path across our product. Metrics to the rest is natively done via resource attributes and we primarily expose correlation for K8s-based workloads with more to come. We don't do exemplars _yet_ to solve the more generic correlation case for metrics (though I don't think statsd can transmit exemplars)

Elixir: We try to do our best to support wherever our users are, the OTel SDK and ours have continued to change in parallel over time - we'll want to likely re-evaluate if we should start pointing towards the base OTel SDK for Elixir. We've been pretty early on the OTel SDK side across the board so things continue to evolve, for example our Deno OTel integration came out I think over a year before Deno officially launched one with native HyperDX documentation <3

Notebooks: Yes, it should land in an experimental state shortly, stay tuned :) There's a lot of exciting workflows we're looking to unlock with notebooks as well. If you have any thoughts in this direction, please let me know. I'd love to get more user input ahead of the first release.

hosh•12h ago
Thank you. I saw a different thread about Otel statsd receiver, so that works out better. The last time I had looked into it, the otel metrics specs were very complex.

I think this is enough features for me to seriously take a look at it as a Datadog alternative.

user3939382•20h ago
There are so many of these log aggregators that I've completely lost track. I used Datadog extensively and found it overpriced, with a very confusing UI.
RhodesianHunter•20h ago
That's what happens when there's a need for something.

You see an explosion in offerings, and then eventually it's whittled down to a handful of survivors.

secondcoming•17h ago
Everyone has found Datadog to be overpriced!

So they switch to Prometheus and Grafana and now have to manage a Prometheus cluster. Far cheaper, but far more annoying.

wvh•6h ago
I have no experience with Datadog, but I'm not sure "cheaper" is an easy adjective to quantify. The whole metrics/logs/traces thing in Kubernetes is still painful, a lot of work and there's no end to the confusion. After several years in the trenches, it still takes me longer (i.e. more money) to install, configure and make sense of a monitoring stack than to set up the software it is monitoring.

It doesn't help that typically most software is ancient, spits out heaps of stack traces and wall-of-text output, doesn't use structured logging and generally doesn't let itself be monitored easily.

So yeah, getting meaningful insights from a highly available observability stack will take some serious time and resources, and I can understand smaller companies just handing it over to a third party so they can get on with their core business (AKA easy billing).

landl0rd•15h ago
Datadog is a good product but one of the most blatantly overpriced things I’ve had the displeasure to use.
Immortalin•20h ago
I remember back in the day Mike was building Huggingface before Huggingface was a thing. He was ahead of his time. It's a pity ModelDepot is no longer around.
mikeshi42•18h ago
Wow this is an incredible throwback! Can't believe your memory is this good. It's quite funny and I totally agree - I met the Gradio founders in an accelerator (when they were just getting started) after we shut down ModelDepot - and they of course ended up getting acquired into Hugging Face. It's funny how things end up sometimes :)
bilalq•20h ago
This is really interesting.

Is ClickHouse the only stateful part of this stack? Would love to see compatibility with Rotel[0], a Rust implementation of the OTel collector, so that this becomes usable for serverless runtime environments.

One key thing Datadog has is their own proprietary alternative to the OTEL collector that is much more performant.

[0]: https://github.com/streamfold/rotel

mikeshi42•20h ago
I agree - rotel seems like a really good fit for a lightweight Lambda integration for OTel. It should already work, since we stand up an OTel ingest endpoint, so sending data over is seamless (kind of the beauty of OTel, of course)!

I've also been in touch with Mike & Ray for a bit, who've told me they've added ClickHouse support recently which makes the story even better :)

mike_heffner•17h ago
Hi all — one of the authors of Rotel here. Thanks for the kind words, Bilal and Michael.

We're excited to test our ClickHouse integration with ClickStack, as we believe OTel and ClickHouse make for a powerful observability stack. Our open-source Rust OpenTelemetry collector is designed for high performance in resource-constrained environments. We'd love for you to check it out!

smetj•9h ago
wow didn't know about rotel ... looks very interesting indeed. Especially those python bindings ... Bookmarked!
buserror•19h ago
I am absolutely amazed at the amount of garbage being "logged" - enough that it is not just a huge business, but also one of the primary tasks for some devops guys. It's like a goal in itself: you have a look at the output and it is absolutely scary, HUGE messages being "logged" for purposes unknown.

I've seen single traces over 100KB of absolute pure randomness encoded as base64... Because! Oh and also, we have to pay for the service, so it looks important.

Sure they tell you it is super helpful for debugging issues, but in a VERY large proportion of cases, it is 1) WAY too much, and 2) never used anyway. And most of the time what's interesting is the last 10 minutes of the debug version, you don't need a "service" for that.

/me gets down off his horse :-)

metta2uall•11h ago
I think you're at least partially right - not everything, but a lot of the data is not useful, wasting money, bandwidth, electricity, etc. There should be more dynamic control over what gets logged/filtered on the client side.
smetj•9h ago
I totally agree with this. Same for metrics.
SOLAR_FIELDS•17h ago
Comparison to the other player in this space, Signoz? Also uses clickhouse as backend
atombender•17h ago
I'm looking for a new logging solution to replace Kibana. I have very good experience with ClickHouse, and HyperDX looks like a decent UI for it.

I'm primarily interested in logs, though, and the existing log shipping pipeline is around Vector on Kubernetes. Admittedly Vector has an OTel sink in beta, but I'm curious if that's the best/fastest way to ship logs, especially given that the original data comes out of apps as plain JSON rather than OTel.

The current system is processing several TB/day and needs fairly serious throughput to keep up.

mikeshi42•17h ago
Luckily ClickHouse and serious throughput are pretty synonymous. Internally we're at 100+PB of telemetry stored in our own monitoring system.

Vector supports directly writing into ClickHouse - several companies use this at scale (iirc Anthropic does exactly this, they spoke about this recently at our user conference).

Please give it a try and let us know how it goes! Happy to help :)

atombender•17h ago
Thanks! Very familiar with ClickHouse, but can logs then be ingested into CH without going through HyperDX? Doesn't HyperDX require a specific schema that the Vector pipeline would have to adapt the payloads to?
mikeshi42•17h ago
Nope! We're virtually schema agnostic: you can map your custom schema to observability concepts (e.g. the SQL expression for TraceID - either a column or a full function/expression will work).

We don't have any lock in to our ingestion pipeline or schema. Of course we optimize a lot for the OTel path, but it works perfectly fine without it too.
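
To make the schema-mapping idea concrete, here's a hypothetical sketch - the mapping shape, field names, and `app_logs` table are invented for illustration, not ClickStack's actual configuration:

```python
# Hypothetical: map logical observability fields to SQL expressions over
# an existing ClickHouse table, then derive the query this mapping implies.
mapping = {
    "Timestamp": "toDateTime64(ts, 9)",  # a full expression over a `ts` column
    "TraceID": "trace_id",               # a plain column works too
    "Body": "message",
    "SeverityText": "upper(level)",
}

def build_select(table: str, mapping: dict) -> str:
    cols = ", ".join(f"{expr} AS {name}" for name, expr in mapping.items())
    return f"SELECT {cols} FROM {table}"

print(build_select("app_logs", mapping))
```

The point is that any table whose columns can be expressed this way becomes queryable as logs, with no fixed ingestion schema required.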

atombender•17h ago
That's great to hear. I will take a closer look ASAP.
smetj•9h ago
I think settling on OTel as the transport/wire format is an excellent strategic choice, keeping the most options open for the future. Two fewer things to worry about.
atombender•5h ago
I'm less concerned about the wire format than reducing complexity and bottlenecks in a high-volume, high-throughput system. Needing an intermediate API just to ingest into ClickHouse adds another step where things can slow down or break, not to mention that a gRPC API just to convert JSON payloads into INSERTs is quite wasteful if you can just insert directly.
ensignavenger•17h ago
Really interesting. Unfortunately, it looks like HyperDX depends on Mongo? I wonder if there are any open source document stores (possibly a Mongo-compatible one) that could work with it?
mikeshi42•17h ago
In theory you should be able to try using FerretDB for example.

We have this on the medium-term roadmap: investigating proper support for a compatibility layer such as Ferret, or more likely just using ClickHouse itself as the operational data store.

ptrfarkas•17h ago
FerretDB maintainer here - we'll be looking at this
mikeshi42•16h ago
That'd be awesome! Ferret has been on my radar for a while now :) If you want to chat with us on Discord: https://hyperdx.io/discord
wrn14897•10h ago
Hey, I'm a maintainer of HyperDX. I'd love to chat with you about a potential collaboration. We're planning to migrate off MongoDB. Please reach out to me on Discord (warren)
ensignavenger•15h ago
FerretDB looks like a great alternative, thanks! I'll be keeping Ferret and ClickStack on my radar!
ah27182•13h ago
Do I need to sign in when using the docker container?
mikeshi42•12h ago
There's a version that we call local mode which is intended for engineers using it as part of their local debugging workflow: https://clickhouse.com/docs/use-cases/observability/clicksta...

Otherwise, yes, you can authenticate against the other versions with an email/password (the email doesn't really do anything in the open source distribution - it's just a user identifier, but we keep it there to be consistent).

theogravity•11h ago
This is really cool considering how expensive DataDog can get. I'm the author of LogLayer (https://loglayer.dev), a structured logger for TypeScript that lets you use multiple loggers together. I've written transports that allow shipping to other loggers like pino and cloud providers such as DataDog.

I spent some time writing an integration for HyperDX after seeing this post and hope you can help me roll it out! Would love to add a new "integrations" section to my page that links to the docs on how to use HyperDX with LogLayer.

https://github.com/hyperdxio/hyperdx-js/pull/184

wrn14897•9h ago
Hey this looks awesome! We will take a look at it
gigatexal•11h ago
Datadog is expensive, this is true. But I have never felt it to be slow. Speed is not its killer feature; it's everything you can do with it once you have logs and/or metrics flowing into it.

The dashboards and their creation are intuitive. Creating alerts and things from Airflow logs is easy using their DSL. Connecting and sending notifications to things like Slack just works (tm).

So this is how we justify the Datadog costs: all the engineering time it saves (engineers are still expensive; AI hasn't replaced us yet) and how quickly we can move from raw logs and metrics to useful insights.

mikeshi42•9h ago
Totally agree - you use an observability tool because it answers your questions quickly, not just return searches quickly.

Beyond raw performance and cost effectiveness, which is quite important at scale, we work a lot on making sure the application layer itself is intuitive to use. You can always play around with what ours looks like at play.hyperdx.io :)

wiradikusuma•9h ago
Can I say it's similar to Signoz, in that both are ClickHouse-powered and available as both open-source and hosted versions? How are you guys different compared to Signoz?

(The UI looks similar too, although I guess a lot of observability tools seem to adopt that kind of UI).

oulipo•7h ago
Interested in a comparison between both too!
regnerba•9h ago
We run a full Grafana stack (Loki, Tempo, Prometheus, Alloy agent, Grafana) backed by self-hosted S3 (we are all on-prem physical hardware).

While I do like the stack we have, it is a lot of components to run and configure. Don’t think we have ever had any issues once it was up and running.

Does anyone have any thoughts about how this compares? We don't have a huge amount of data: 1 month of metrics is about 200GB, and logs aren't a whole lot more - less than a TB, I think, for 2 weeks.

JimDabell•7h ago
I’m not sure what this is intended to do, but when I created an account, I saw in the left sidebar a widget saying “Was this search result helpful?” with thumbs up and thumbs down buttons. I hadn’t searched for anything. I pressed the “Hide” button instead, and the widget changed to an “Any feedback?” button. I thought I would tell you about this weird bug, so I clicked the feedback button. The widget changed back into the “Was this search result helpful?” widget.

I found the UX very difficult to read. The monospace font, the unusually small text, the bold white and bright green text on a dark background… I found it a little more readable by changing the font to system-ui, but not by much. Please consider a more traditional style instead of leaning into the 80s terminal gimmick. This factor alone makes me want to not use it. It needs to be easy to read, not a pain to read.

dustedcodes•4h ago
Very cool, reminds me of SigNoz.

How would I self host this in k8s? Would I deploy a ClickHouse cluster using the Altinity operator and then connect it using the HyperDX local mode or what is the recommended approach to self-host ClickStack?