

Show HN: Neurox – GPU Observability for AI Infra

https://github.com/neuroxhq/helm-chart-neurox-control
25•leeab•9mo ago

Comments

leeab•9mo ago
GPU observability is broken, so we built Neurox.

When I co-founded Mezmo (a Series D observability platform), we obsessed over logs, metrics, and traces. I learned firsthand how critical app-level observability is for DevOps: cutting through logging noise and finding the needle in the haystack is everything.

But after diving into AI infra, I noticed a huge gap: GPU monitoring in multi-cloud environments is woefully insufficient.

Despite companies throwing billions at GPUs, there's no easy way to answer basic questions:

- What's happening with my GPUs?

- Who's using them?

- How much is this project costing me?

What's happening: Metrics (like DCGM_FI_DEV_GPU_UTIL) told us what was happening, but not why. Underutilized GPUs? Maybe the pod is crashlooping, stuck pulling an image, or misconfigured; or maybe the application simply isn't using the GPU.

Who's using the compute: Kubernetes metadata such as namespace or pod name gave us the missing link. We traced issues like failed pod states, incorrect scheduling, and even PyTorch jobs silently falling back to CPU.
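For the CPU-fallback case in particular, a job can guard itself. A minimal, hypothetical helper (the name and structure are ours; in a real PyTorch job you'd pass in `torch.cuda.is_available()`):

```python
# Hypothetical guard against silent CPU fallback. The availability
# flag is a parameter so the logic is testable without a GPU; in a
# real job: device = pick_device(torch.cuda.is_available())

def pick_device(cuda_available: bool, allow_cpu_fallback: bool = False) -> str:
    """Return a device string, refusing to silently train on CPU."""
    if cuda_available:
        return "cuda"
    if allow_cpu_fallback:
        return "cpu"
    raise RuntimeError(
        "CUDA not available; refusing silent CPU fallback. "
        "Check drivers, the device plugin, and resource requests."
    )
```

Failing loudly like this is what surfaces the misconfiguration that metrics alone would show only as a mysteriously idle GPU.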

How much is this gonna cost: Calculating cost isn't easy either. If you're renting, you need GPU-time per pod plus cloud billing data. If you're on-prem, you'll want power usage plus rate cards. Neither comes from a metrics dashboard.
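The arithmetic itself is simple once you have those inputs; a sketch with made-up rates and numbers:

```python
# Hypothetical cost math. Inputs (GPU-hours per pod, hourly rates,
# power draw, electricity price) are illustrative, not real figures.

def rented_gpu_cost(gpu_hours_by_pod: dict, hourly_rate_usd: float) -> dict:
    """Renting: cost per pod = GPU-hours x provider hourly rate."""
    return {pod: round(h * hourly_rate_usd, 2)
            for pod, h in gpu_hours_by_pod.items()}

def on_prem_energy_cost(watts: float, hours: float, usd_per_kwh: float) -> float:
    """On-prem: rough energy cost = power draw x time x rate card."""
    return round(watts / 1000 * hours * usd_per_kwh, 2)

print(rented_gpu_cost({"train-llm": 10.0}, 2.50))  # {'train-llm': 25.0}
print(on_prem_energy_cost(700, 24, 0.12))          # 2.02
```

The hard part isn't this math; it's attributing GPU-hours to pods and owners in the first place, which is the kube-state join described above.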

---

Most teams are duct-taping scripts to Prometheus, Grafana, and kubectl.

So we built Neurox: a purpose-built GPU observability platform for Kubernetes-native, multi-cloud AI infrastructure. Think:

1. Real-time GPU utilization and alerts for idle GPUs

2. Cost breakdowns per app/team/project and FinOps integration

3. Unified view across AWS, GCP, Azure, and on-prem

4. Kubernetes-aware: connect node metrics to running pods, jobs, and owners

5. GPU health checks

Everyone we talked to runs their compute in multi-cloud and uses Kubes as the unifier across all environments. Metrics alone aren't good enough; you have to combine metrics with Kube state and financial data to see the whole picture.

Check us out and let us know what we're missing. Curious to hear from folks who've rolled their own: what did you do?

Lee @ Neurox

nickysielicki•9mo ago
> Everyone we talked to runs their compute in multi-cloud and uses Kubes as the unifier across all environments.

I categorically support any company willing to take a strong stance on the total irrelevance of slurm.

dharmab•9mo ago
Is your comment pro-SLURM or anti-SLURM?

I took a serious look at SLURM for my problem space and among my conclusions were:

- Hiring people who know Kubernetes is going to be far cheaper

- Kubernetes is gonna be way more compatible with popular o11y tooling

- SLURM's accounting is great if your billing model involves multiple government departments and universities, each with their own grants and strict budgets, but it's far more complex than the typical tech company needs

- Writing a custom scheduler that outperforms kube-scheduler is far easier than dealing with SLURM in general

leeab•9mo ago
We're neither for nor against Slurm. I do believe it has use cases in HPC, scientific, and academic settings. We think our web UI is a bit easier to use, and we do offer a competing scheduler.

Our focus is definitely more on container-first, cloud-native Kubernetes environments like EKS, GKE, and AKS. Also, we're much more focused on health monitoring of the actual GPU hardware rather than just scheduling jobs.

firgrove•9mo ago
this feels like grafana with extra steps

leeab•9mo ago
Haha...there is some truth to that. We do use Prometheus under the hood to collect metrics. However, our thesis is that metrics alone aren't enough. We marry metrics + Kube state + cost data to get the whole picture.

Also, we're purpose-built to monitor GPUs, so we have things like drilling down from a Kube cluster to its GPU nodes to an individual GPU card.

freeatnet•9mo ago
Interesting! A friend recently asked me if I knew of any tools to improve GPU observability across their deployments (primarily for cost tracking purposes, I think), but he was looking for an OSS solution. Do you plan to open source this?

leeab•9mo ago
We have considered this and may go down that route in the future. One thing we asked ourselves was what open sourcing would provide. Usually it's a desire for privacy, or cost savings in the form of self-hosting, among other reasons.

Currently, our free version is self-hosted and monitors clusters with up to 64 GPUs. We feel this will work for many use cases, especially just to try it out. Monitoring GPUs typically requires you to deploy something where your GPUs live. Since you’re already installing software on your cluster, you might as well keep your data there too.

fustercluck•9mo ago
Your GitHub repo says you need 120 GB of persistent storage, but our bare-metal GPU clusters only have local storage. I'd like to try your thing, but hosting the data with the GPUs is a pretty big blocker for us.

leeab•9mo ago
Ahh yes...here's how you solve that. Install the Neurox Control plane onto any regular Kubes cluster (it doesn't need GPUs, just persistent storage; e.g. EKS, AKS, GKE) and leave out the last flag in the instructions, `--set workload.enabled=true`. More info: https://docs.neurox.com/installation/alternative-install-met...

Then on your GPU cluster without persistent disk, you just need to install the Neurox Workload agent. In the Web Portal UI, click Clusters > New Cluster and copy/paste the snippet there.

fustercluck•9mo ago
Oh sweet, I'll take a look. Thanks!

nicoslepicos•9mo ago
I've heard a few folks at events mention curiosity about stuff like this.

Given you decided to start self-hosted, are you planning a cloud version soon as well?

I'm also curious who you think is the right fit for this right now in terms of initial users.

leeab•9mo ago
One of the reasons we went down the self-hosted route is to ensure that your data remains on your servers. Since our architecture allows for separation between where our control plane lives vs where GPU workloads run, we can definitely host the control plane portion for you. Then you just need to run our agent only on your GPU cluster. Shoot me an email: lee at neurox.com and we can discuss!
zekrioca•9mo ago
* Not open-source.

leeab•9mo ago
Someone asked about this earlier: we have considered it and may go down that route in the future. Was there something specific you were looking for with open source? (e.g. privacy, cost, etc.)

Our solution is self-hosted and your data remains on your servers. And I think we do provide a fairly generous free limit of 64 GPUs.

28374654•9mo ago
Arjikh

badmonster•9mo ago
What metrics and Kubernetes runtime data does Neurox collect to provide its AI workload monitoring dashboards, and how customizable are these dashboards for different user roles like developers or finance auditors?
leeab•9mo ago
We collect a handful of metrics, but coming from our previous lives in DevOps, we collect only what's needed to avoid unnecessary metrics bloat.

The main 3 are:

- GPU runtime stats from nvidia-smi

- Running pods from Kube state

- Node data & events from Kube state

We have several screens with similar information intended for different roles. For example, the Workloads screen is mainly for researchers to monitor their workloads from creation to completion. The Reports screen shows mainly cost data grouped by team/project, etc.
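For illustration, here's roughly what collecting the first of those three could look like. This is our assumption based on nvidia-smi's CSV query mode, not Neurox's actual collector, and the sample output is fabricated:

```python
import csv
import io

# In a real collector you'd capture the output of something like:
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used \
#              --format=csv,noheader,nounits
# The sample below stands in for that output (values invented).
SAMPLE = "0, 87, 31412\n1, 3, 512\n"

def parse_gpu_stats(text: str) -> list[dict]:
    """Parse per-GPU runtime stats from nvidia-smi CSV output."""
    rows = []
    for idx, util, mem in csv.reader(io.StringIO(text),
                                     skipinitialspace=True):
        rows.append({"gpu": int(idx),
                     "util_pct": int(util),
                     "mem_used_mib": int(mem)})
    return rows

print(parse_gpu_stats(SAMPLE))
```

Each parsed row would then be tagged with the pod and node metadata from Kube state before it's stored, which is the join that makes the per-role screens possible.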

mountainriver•9mo ago
Kubernetes is now kind of a bad abstraction for accelerated compute, given the GPU shortages

leeab•9mo ago
Well, it depends on how many GPU clouds you're managing. We've talked to a bunch of companies, some startups, some enterprises, and the main trend we found was the sheer number of companies with GPU clusters from multiple clouds...likely due to the GPU shortage.

And yes, it's nowhere near 100%, but an overwhelming majority was running Kubes for GPU workloads...mainly so they'd have a unifying layer instead of managing each cloud separately and mastering each one's tooling.

Are you using something else? Slurm, docker, etc?

nickysielicki•9mo ago
Are you concerned at all that your name is one letter away from neuronx?

https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neur...

leeab•9mo ago
I've heard of AWS Neuron...didn't realize they had a package called NeuronX. Tbh, I feel like many AI companies have similar names. Maybe the guys who own neuronx.io might be more concerned...