
SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
115•valyala•4h ago•19 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
52•zdw•3d ago•17 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
28•gnufx•3h ago•22 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
62•surprisetalk•4h ago•72 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
103•mellosouls•7h ago•186 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
146•AlexeyBrin•10h ago•26 comments

Tiny C Compiler

https://bellard.org/tcc/
3•guerrilla•37m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
104•vinhnx•7h ago•14 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
855•klaussilveira•1d ago•261 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1097•xnx•1d ago•620 comments

First Proof

https://arxiv.org/abs/2602.05192
71•samasblack•6h ago•51 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
10•mbitsnbites•3d ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
16•vedantnair•39m ago•9 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
65•thelok•6h ago•12 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
143•valyala•4h ago•119 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
242•jesperordrup•14h ago•81 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
522•theblazehen•3d ago•194 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
34•momciloo•4h ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
95•onurkanbkrc•9h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
15•languid-photic•3d ago•5 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
39•marklit•5d ago•6 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
51•rbanffy•4d ago•10 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
193•1vuio0pswjnm7•11h ago•282 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
261•alainrk•9h ago•434 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
619•nar001•8h ago•277 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
125•videotopia•4d ago•40 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
102•speckx•4d ago•124 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
35•sandGorgon•2d ago•16 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
213•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
290•isitcontent•1d ago•38 comments

Continuous Nvidia CUDA Profiling in Production

https://www.polarsignals.com/blog/posts/2025/10/22/gpu-profiling
98•brancz•3mo ago

Comments

gnurizen•3mo ago
Author here, would be happy to field any questions or feedback!
sirhcm•3mo ago
Does the profiler read any of the GPU's performance counters? It would be super cool to have an open-source tool that can capture the same data Nsight Compute does.
gnurizen•3mo ago
This profiler is focused on kernel execution, but we do scrape high-level metrics (https://www.polarsignals.com/blog/posts/2025/06/04/latest-in..., which is based on https://github.com/polarsignals/gpu-metrics-agent). What performance counters in particular were you interested in?
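
(For illustration, the "high level metrics" here are device-level numbers of the sort NVML exposes; the sketch below is hypothetical and not gpu-metrics-agent's actual code - the agent may use a different interface entirely.)

    /* Hypothetical sketch: device-level GPU metrics via NVML.
       Build: gcc nvml_metrics.c -o nvml_metrics -lnvidia-ml */
    #include <nvml.h>
    #include <stdio.h>

    int main(void) {
        if (nvmlInit_v2() != NVML_SUCCESS) return 1;

        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex_v2(0, &dev) == NVML_SUCCESS) {
            nvmlUtilization_t util;    /* GPU and memory utilization, percent */
            nvmlMemory_t mem;          /* framebuffer memory usage            */
            unsigned int mw = 0;       /* power draw, milliwatts              */

            if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS)
                printf("gpu %u%%  mem %u%%\n", util.gpu, util.memory);
            if (nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS)
                printf("fb %llu / %llu bytes\n",
                       (unsigned long long)mem.used, (unsigned long long)mem.total);
            if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS)
                printf("power %u mW\n", mw);
        }

        nvmlShutdown();
        return 0;
    }
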
sirhcm•3mo ago
Cache hit rate is probably the most immediately useful. Although, given that this is for always-on profiling, maybe this project isn't as geared towards optimizing kernels as I originally thought? In theory, reading the counters should be low overhead, though.
porridgeraisin•3mo ago
It depends on what counter.

[ All from my experience on home GPUs, and in a lab with 2 nodes, each with 2x 80 GB H100s. Not extensively benchmarked. ]

Events like kernel launches, which this profiler reads right now, carry a very small overhead (1-2%). Kernel-level metrics like DRAM utilisation, cache hit rate, SM occupancy, etc. usually give you a 5-10% overhead. If you want to plot a flame graph at the instruction level (mostly useful for learning purposes), then you go off the rails - I have seen even 25% overhead. And finally, full traces add tons of overhead, but that's pretty much expected - they produce GBs of profiling data anyway.

sirhcm•3mo ago
Occupancy and RAM utilization are available from static analysis. A sampling profiler would also obviously not be suitable for this always-on profiler case. But reading the counters [0] from the GSP should be cheap.

[0] https://en.wikipedia.org/wiki/Hardware_performance_counter

embedding-shape•3mo ago
This "low-overhead always on GPU profiler" seems really cool and useful, but we're not using Kubernetes for anything, and the instructions for how to use it seems to only include Kubernetes. Is there a way of running this without Kubernetes?
gnurizen•3mo ago
Yeah, the quickstart guide covers Docker, k8s, and "raw" binary options:

https://www.parca.dev/docs/quickstart/

knlb•3mo ago
Thanks for the post, this is pretty cool!

I feel like I've seen CUPTI have fairly high overhead depending on the CUDA version, but I'm not very confident -- did you happen to benchmark different workloads with CUPTI on/off?

---

If you're taking feature requests: a way to subscribe to -- and get tracebacks for -- CUDA context creation would be very useful; I've definitely been surprised by finding processes on the wrong GPU, and being able to easily figure out where they came from would be great.

I did a hack by using LD_PRELOAD to subscribe/publish the event, but never really followed through on getting the Python stack trace.
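
(A hypothetical sketch of that kind of interposer follows; the symbol choice, logging, and native-backtrace handling are assumptions, not the original hack.)

    /* Hypothetical interposer sketch: log CUDA context creation with a
       native backtrace so you can see which code path picked the device.
       Build: gcc -shared -fPIC ctxspy.c -o libctxspy.so -ldl
       Run:   LD_PRELOAD=./libctxspy.so <your program>
       Caveat: runtimes that resolve driver entry points through
       cuGetProcAddress can bypass this interposition entirely. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <execinfo.h>
    #include <stdio.h>
    #include <cuda.h>

    typedef CUresult (*cuCtxCreate_fn)(CUcontext *, unsigned int, CUdevice);

    CUresult cuCtxCreate_v2(CUcontext *pctx, unsigned int flags, CUdevice dev) {
        fprintf(stderr, "[ctxspy] cuCtxCreate on device %d\n", (int)dev);

        /* Native (C-level) backtrace of the call site; a Python-level
           traceback would need extra work on top of this. */
        void *frames[32];
        int n = backtrace(frames, 32);
        backtrace_symbols_fd(frames, n, fileno(stderr));

        cuCtxCreate_fn real = (cuCtxCreate_fn)dlsym(RTLD_NEXT, "cuCtxCreate_v2");
        return real ? real(pctx, flags, dev) : CUDA_ERROR_NOT_INITIALIZED;
    }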

gnurizen•3mo ago
CUPTI is kind of a choose-your-own-adventure thing: as you subscribe to more stuff, the overhead goes up. This is a minimalist profiler that just subscribes to kernel launches and nothing else. Still, to your point, depending on kernel launch frequency/granularity it may carry more overhead than some would want in production. We have plans to address that with probabilistic sampling instead of profiling everything, but we wanted to get this into folks' hands and get some real-world feedback first.
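
(For the curious, below is a minimal, illustrative CUPTI activity-API sketch that subscribes to kernel records and nothing else; it is not Polar Signals' code, and details such as the CUpti_ActivityKernel4 struct version vary across CUPTI releases.)

    /* Illustrative sketch of a kernel-launch-only CUPTI subscription.
       Enables kernel activity records and nothing else.
       Build against the CUPTI headers/libs in the CUDA toolkit (-lcupti). */
    #include <cupti.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_SIZE (8 * 1024 * 1024)

    static void CUPTIAPI bufferRequested(uint8_t **buffer, size_t *size,
                                         size_t *maxNumRecords) {
        *buffer = (uint8_t *)malloc(BUF_SIZE);
        *size = BUF_SIZE;
        *maxNumRecords = 0;  /* let CUPTI pack as many records as fit */
    }

    static void CUPTIAPI bufferCompleted(CUcontext ctx, uint32_t streamId,
                                         uint8_t *buffer, size_t size,
                                         size_t validSize) {
        CUpti_Activity *record = NULL;
        while (cuptiActivityGetNextRecord(buffer, validSize, &record) == CUPTI_SUCCESS) {
            if (record->kind == CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) {
                CUpti_ActivityKernel4 *k = (CUpti_ActivityKernel4 *)record;
                printf("kernel %s: %llu ns\n", k->name,
                       (unsigned long long)(k->end - k->start));
            }
        }
        free(buffer);
    }

    int main(void) {
        /* Subscribe to kernel activity only; no counters, no traces. */
        cuptiActivityRegisterCallbacks(bufferRequested, bufferCompleted);
        cuptiActivityEnable(CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL);

        /* ... launch CUDA work here ... */

        cuptiActivityFlushAll(0);
        return 0;
    }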