frontpage.

Tiny C Compiler

https://bellard.org/tcc/
127•guerrilla•4h ago•56 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
214•valyala•8h ago•38 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
120•surprisetalk•8h ago•130 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
5•yi_wang•54m ago•0 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
48•gnufx•7h ago•50 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
145•mellosouls•11h ago•306 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
890•klaussilveira•1d ago•271 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
142•vinhnx•11h ago•16 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
169•AlexeyBrin•14h ago•30 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
77•randycupertino•3h ago•134 comments

First Proof

https://arxiv.org/abs/2602.05192
108•samasblack•10h ago•69 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
274•jesperordrup•18h ago•87 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
60•momciloo•8h ago•11 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
31•mbitsnbites•3d ago•2 comments

Show HN: Craftplan – Elixir-based micro-ERP for small-scale manufacturers

https://puemos.github.io/craftplan/
8•deofoo•4d ago•1 comment

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
7•todsacerdoti•4d ago•2 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
89•thelok•10h ago•18 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
101•zdw•3d ago•51 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
556•theblazehen•3d ago•206 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
100•josephcsible•6h ago•121 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
175•valyala•8h ago•165 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
262•1vuio0pswjnm7•14h ago•417 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
26•languid-photic•4d ago•7 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
114•onurkanbkrc•13h ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
139•videotopia•4d ago•46 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
220•limoce•4d ago•123 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
131•speckx•4d ago•203 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
296•isitcontent•1d ago•39 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
577•todsacerdoti•1d ago•279 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
49•marklit•5d ago•10 comments

Continuous Nvidia CUDA Profiling in Production

https://www.polarsignals.com/blog/posts/2025/10/22/gpu-profiling
98•brancz•3mo ago

Comments

gnurizen•3mo ago
Author here, would be happy to field any questions or feedback!
sirhcm•3mo ago
Does the profiler read any of the GPU's performance counters? Would be super cool to have an open source tool that can capture the same data Nsight Compute does.
gnurizen•3mo ago
This profiler is focused on kernel execution, but we do scrape high-level metrics (https://www.polarsignals.com/blog/posts/2025/06/04/latest-in..., which is based on https://github.com/polarsignals/gpu-metrics-agent). What performance counters in particular were you interested in?
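
(For context, high-level device metrics like that are typically scraped through NVML. The sketch below illustrates that general polling pattern only; it is not the gpu-metrics-agent code, and the file name and build line are assumptions.)

    /* Minimal NVML polling sketch -- illustrative only, not the actual
     * gpu-metrics-agent code. Assumed build line: gcc nvml_poll.c -lnvidia-ml */
    #include <stdio.h>
    #include <unistd.h>
    #include <nvml.h>

    int main(void) {
        nvmlDevice_t dev;
        if (nvmlInit_v2() != NVML_SUCCESS) return 1;
        if (nvmlDeviceGetHandleByIndex_v2(0, &dev) != NVML_SUCCESS) return 1;

        for (int i = 0; i < 10; i++) {
            nvmlUtilization_t util;   /* GPU and memory-bandwidth utilization, % */
            nvmlMemory_t mem;         /* used/free/total device memory           */
            if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS &&
                nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS) {
                printf("sm=%u%% membw=%u%% mem_used=%llu MiB\n",
                       util.gpu, util.memory,
                       (unsigned long long)(mem.used >> 20));
            }
            sleep(1);                 /* low-frequency scrape keeps overhead tiny */
        }
        nvmlShutdown();
        return 0;
    }
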
sirhcm•3mo ago
Cache hit rate is probably the most immediately useful. Although, given that this is for always-on profiling, maybe this project isn't as geared towards optimizing kernels as I originally thought? In theory, reading the counters should be low overhead, though.
porridgeraisin•3mo ago
It depends on what counter.

[ All from my experience on home GPUs, and in a lab with 2 nodes with 2x 80GB H100s each. Not extensively benchmarked ]

Events like kernel launches, which is what this profiler reads right now, have a very small overhead (1-2%). Kernel-level metrics like DRAM utilisation, cache hit rate, SM occupancy, etc. usually give you a 5-10% overhead. If you want to plot a flame graph at the instruction level (mostly useful for learning purposes), then you go off the rails - I've seen even 25% overhead. And finally, full traces add tons of overhead, but that's pretty much expected - they produce GBs of profiling data anyway.

sirhcm•3mo ago
Occupancy and RAM utilization are available from static analysis. A sampling profiler would also obviously not be suitable for this always-on case. But reading the counters [0] from the GSP should be cheap.

[0] https://en.wikipedia.org/wiki/Hardware_performance_counter

embedding-shape•3mo ago
This "low-overhead always on GPU profiler" seems really cool and useful, but we're not using Kubernetes for anything, and the instructions for how to use it seems to only include Kubernetes. Is there a way of running this without Kubernetes?
gnurizen•3mo ago
Yeah, the quickstart guide covers Docker, k8s, and "raw" binary options:

https://www.parca.dev/docs/quickstart/

knlb•3mo ago
Thanks for the post, this is pretty cool!

I feel like I've seen CUPTI have fairly high overhead depending on the CUDA version, but I'm not very confident -- did you happen to benchmark different workloads with CUPTI on/off?

---

If you're taking feature requests: a way to subscribe to -- and get tracebacks for -- CUDA context creation would be very useful. I've definitely been surprised by finding processes on the wrong GPU, and being able to easily figure out where they came from would be great.

I did a hack using LD_PRELOAD to subscribe to/publish the event, but never really followed through on getting the Python stack trace.
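
(For illustration, a minimal sketch of that style of LD_PRELOAD shim: it interposes the cuCtxCreate_v2 driver-API entry point and dumps a native backtrace. The file name, build line, and stand-in typedefs are assumptions, it only catches directly linked driver-API calls, and the Python-level stack would still need separate handling -- this is not the hack described above.)

    /* ldpreload_ctxspy.c -- illustrative LD_PRELOAD shim, not production code.
     * Intercepts cuCtxCreate_v2 (CUDA driver API) and prints a native
     * backtrace, to show where a context on the "wrong" GPU came from.
     * Assumed build: gcc -shared -fPIC ldpreload_ctxspy.c -o libctxspy.so -ldl
     * Assumed run:   LD_PRELOAD=./libctxspy.so ./my_cuda_app
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <execinfo.h>
    #include <stdio.h>

    typedef int CUresult;       /* stand-ins for the driver API's opaque types */
    typedef void *CUcontext;
    typedef int CUdevice;

    CUresult cuCtxCreate_v2(CUcontext *pctx, unsigned int flags, CUdevice dev) {
        /* Look up the real symbol further down the link chain. */
        static CUresult (*real)(CUcontext *, unsigned int, CUdevice);
        if (!real)
            real = (CUresult (*)(CUcontext *, unsigned int, CUdevice))
                       dlsym(RTLD_NEXT, "cuCtxCreate_v2");

        fprintf(stderr, "[ctxspy] cuCtxCreate_v2 on device %d, called from:\n", dev);
        void *frames[32];
        int n = backtrace(frames, 32);
        backtrace_symbols_fd(frames, n, 2);   /* native (C) stack only */

        return real(pctx, flags, dev);
    }
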

gnurizen•3mo ago
CUPTI is kind of a choose-your-own-adventure thing: as you subscribe to more stuff, the overhead goes up. This is a minimalist profiler that subscribes to just the kernel launches and nothing else. Still, to your point, depending on kernel launch frequency/granularity it may have higher overhead than some would want in production. We have plans to address that with probabilistic sampling instead of profiling everything, but we wanted to get this into folks' hands and get some real-world feedback first.
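
(For context, subscribing to only kernel activity records with the CUPTI Activity API looks roughly like the sketch below. This is a trimmed illustration of the general pattern, not the Polar Signals agent; the buffer size and error handling are simplified.)

    /* cupti_kernels.c -- rough sketch of the CUPTI Activity API pattern for
     * collecting only kernel execution records (not the actual agent code).
     * Link against libcupti from the CUDA toolkit. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <cupti.h>

    #define BUF_SIZE (8 * 1024 * 1024)

    /* CUPTI asks us for a buffer to fill with activity records. */
    static void CUPTIAPI bufferRequested(uint8_t **buffer, size_t *size,
                                         size_t *maxNumRecords) {
        *buffer = (uint8_t *)malloc(BUF_SIZE);
        *size = BUF_SIZE;
        *maxNumRecords = 0;               /* 0 = as many as fit */
    }

    /* CUPTI hands back a filled (or flushed) buffer; walk its records. */
    static void CUPTIAPI bufferCompleted(CUcontext ctx, uint32_t streamId,
                                         uint8_t *buffer, size_t size,
                                         size_t validSize) {
        CUpti_Activity *record = NULL;
        while (cuptiActivityGetNextRecord(buffer, validSize, &record) ==
               CUPTI_SUCCESS) {
            if (record->kind == CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) {
                /* Cast to the CUpti_ActivityKernelN struct matching your CUDA
                 * version to read the name, timestamps, grid dims, etc. */
                printf("kernel record\n");
            }
        }
        free(buffer);
    }

    int main(void) {
        /* Subscribe to kernel execution records only -- the "minimalist"
         * configuration discussed above; more activity kinds = more overhead. */
        cuptiActivityRegisterCallbacks(bufferRequested, bufferCompleted);
        cuptiActivityEnable(CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL);

        /* ... launch CUDA work here ... */

        cuptiActivityFlushAll(0);         /* drain outstanding records */
        return 0;
    }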