
Zerostack – A Unix-inspired coding agent written in pure Rust

https://crates.io/crates/zerostack/1.0.0
430•gidellav•13h ago•199 comments

Mozilla to UK regulators: VPNs are essential privacy and security tools

https://blog.mozilla.org/netpolicy/2026/05/15/mozilla-to-uk-regulators-vpns-are-essential-privacy...
282•WithinReason•5h ago•88 comments

A nicer voltmeter clock

https://lcamtuf.substack.com/p/a-nicer-voltmeter-clock
210•surprisetalk•12h ago•27 comments

Hosting a website on an 8-bit microcontroller

https://maurycyz.com/projects/mcusite/
150•zdw•10h ago•13 comments

Colossus: The Forbin Project

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
118•doener•2d ago•39 comments

OpenAI and Government of Malta partner to roll out ChatGPT Plus to all citizens

https://openai.com/index/malta-chatgpt-plus-partnership/
212•bookofjoe•15h ago•251 comments

Moving away from Tailwind, and learning to structure my CSS

https://jvns.ca/blog/2026/05/15/moving-away-from-tailwind--and-learning-to-structure-my-css-/
567•mpweiher•1d ago•322 comments

Playing Atari ST Music on the Amiga with Zero CPU

https://arnaud-carre.github.io/2026-05-15-ym-fast-emu/
48•z303•3h ago•15 comments

Prolog Basics Explained with Pokémon

https://unplannedobsolescence.com/blog/prolog-basics-pokemon/
20•birdculture•2d ago•1 comment

SANA-WM, a 2.6B open-source world model for 1-minute 720p video

https://nvlabs.github.io/Sana/WM/
349•mjgil•23h ago•138 comments

Roman Letters

https://romanletters.org/
35•diodorus•2d ago•8 comments

We've made the world too complicated

https://user8.bearblog.dev/the-world-is-too-complicated/
313•James72689•1d ago•303 comments

Illusions of understanding in the sciences

https://link.springer.com/article/10.1007/s42113-026-00271-1
52•sebg•2d ago•24 comments

The Third Hard Problem

https://mmapped.blog/posts/48-the-third-hard-problem
89•surprisetalk•2d ago•45 comments

Accelerando (2005)

https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html
303•eamag•1d ago•170 comments

MCP Hello Page

https://www.hybridlogic.co.uk/blog/2026/05/mcp-hello-page
109•Dachande663•13h ago•36 comments

Frontier AI has broken the open CTF format

https://kabir.au/blog/the-ctf-scene-is-dead
387•frays•1d ago•392 comments

δ-mem: Efficient Online Memory for Large Language Models

https://arxiv.org/abs/2605.12357
220•44za12•1d ago•58 comments

Halt and Catch Fire

https://unstack.io/halt-and-catch-fire
145•ScottWRobinson•17h ago•80 comments

Twilight of the Velocipede: Typesetting Races Before the Age of Linotype

https://publicdomainreview.org/essay/twilight-of-the-velocipede/
16•benbreen•14h ago•0 comments

Why did Clovis toolmakers choose difficult quartz crystal?

https://phys.org/news/2026-04-clovis-toolmakers-difficult-quartz-crystal.html
27•PaulHoule•2d ago•15 comments

Unknowable Math Can Help Hide Secrets

https://www.quantamagazine.org/how-unknowable-math-can-help-hide-secrets-20260511/
55•Xcelerate•3d ago•11 comments

A molecule with half-Möbius topology

https://www.science.org/doi/10.1126/science.aea3321
100•bryanrasmussen•4d ago•7 comments

Self-Distillation Enables Continual Learning [pdf]

https://arxiv.org/abs/2601.19897
67•teleforce•10h ago•17 comments

C++26 Shipped a SIMD Library Nobody Asked For

https://lucisqr.substack.com/p/c26-shipped-a-simd-library-nobody
147•signa11•2d ago•107 comments

3D Gaussian Splatting in a Weekend

https://bfeldman.me/3dgs-weekend/
101•b__feldman•3d ago•10 comments

Show HN: Rocksky – Music scrobbling and discovery on the AT Protocol

https://tangled.org/rocksky.app/rocksky
87•tsiry•18h ago•38 comments

I believe there are entire companies right now under AI psychosis

https://twitter.com/mitchellh/status/2055380239711457578
1999•reasonableklout•1d ago•1174 comments

Content-defined chunking added to Bazel

https://www.buildbuddy.io/blog/content-defined-chunking/
55•siggi•3d ago•5 comments

Greek Alphabet Cards

https://labs.randomquark.com/alphabet_cards/
131•ricochet11•23h ago•59 comments

Understanding the Go Scheduler

https://nghiant3223.github.io/2025/04/15/go-scheduler.html
180•gnabgib•12mo ago

Comments

90s_dev•12mo ago
I heard that the scheduler is a huge obstacle to many potential optimizations. Is that true?
NAHWheatCracker•12mo ago
In some ways, yes. If you want to optimize at that level you ought to use another language.

I'm not a low-level optimization guy, but I've had occasions where I wanted control over which threads my goroutines run on, or to prioritize important goroutines. It's a trade-off for making things less complex, which is standard for Go.

I suppose there's always hope that the Go developers can change things.

silisili•12mo ago
You can kinda work around this, though. The runtime package has LockOSThread, which pins a goroutine to its current OS thread and prevents other goroutines from running on it.

If you model it in a way where you have one goroutine per OS thread that receives and does work, it gets you close, as shown in the sketch below. But in many cases that means rearchitecting the entire code base, as it's not a style I typically reach for.
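
A minimal sketch of that pattern, for illustration (the worker layout, channel routing, and names here are mine, not from the article): each worker goroutine locks itself to an OS thread and serves its own job channel, which is about as close to thread affinity as the standard runtime allows.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // worker pins itself to whatever OS thread the scheduler placed it
    // on, then serves jobs from its own channel. While the lock is held,
    // no other goroutine can be scheduled onto that thread.
    func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
        defer wg.Done()
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()
        for j := range jobs {
            fmt.Printf("worker %d (pinned thread) handling job %d\n", id, j)
        }
    }

    func main() {
        const nWorkers = 4
        var wg sync.WaitGroup
        queues := make([]chan int, nWorkers)
        for i := range queues {
            queues[i] = make(chan int)
            wg.Add(1)
            go worker(i, queues[i], &wg)
        }
        for j := 0; j < 2*nWorkers; j++ {
            queues[j%nWorkers] <- j // route each job to a fixed, pinned worker
        }
        for _, q := range queues {
            close(q)
        }
        wg.Wait()
    }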

naikrovek•12mo ago
That sounds a lot like just using another language.
silisili•12mo ago
It's really not that bad. If you have an existing Go codebase you want to speed up, it's fine.

That said, if you're greenfielding and see this as a limitation to begin with, picking another language is probably the right way.

jerf•11mo ago
If you need it here or there, no. I've got a use case where I need a single locked thread for a particular syscall's functionality. It's not like it leaks out into the rest of the program and everything else has to change to accommodate it.

If you need it pervasively, Go may not be the correct choice. Then again, the list of languages that are not a correct choice in that case is quite long. That's a minority case. An important one, but a minority one.

jasonthorsness•12mo ago
It's always a sign of good design when something as complex as the scheduler described here "just works" behind an abstraction as simple as the goroutine. What a great article.

"1/61 of the time, check the global run queue." Stuff like this is a little odd; I would have thought this would be a variable dependent on the number of physical cores.

01HNNWZ0MV43FF•12mo ago
That's so funny. I just saw `61` in the Tokio code with a comment "copied this from Go"
__turbobrew__•12mo ago
Make sure you set GOMAXPROCS when the runtime is cgroup-limited.

I once profiled a slow Go program running on a node with 168 cores, where cpu.max was 2 cores for the cgroup. The runtime defaults GOMAXPROCS to the number of visible cores, which was 168 in this case. Over half the runtime was the scheduler bouncing goroutines between 168 logical processors (Ps) despite cpu.max allowing only 2 CPUs.

The JRE is smart enough to figure out whether it is running in a resource-limited cgroup and make sane decisions based on that, but Go has no such thing.
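
A rough sketch of compensating by hand, assuming cgroup v2 only (the helper below is mine; the automaxprocs library linked in the replies handles cgroup v1, nested limits, and the corner cases properly):

    package main

    import (
        "math"
        "os"
        "runtime"
        "strconv"
        "strings"
    )

    // setMaxProcsFromCgroup reads the cgroup v2 cpu.max file, which holds
    // "<quota> <period>" in microseconds, or "max <period>" when unlimited,
    // and caps GOMAXPROCS at the effective CPU quota.
    func setMaxProcsFromCgroup() {
        data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
        if err != nil {
            return // not cgroup v2; keep the default
        }
        fields := strings.Fields(string(data))
        if len(fields) != 2 || fields[0] == "max" {
            return // no quota configured
        }
        quota, err1 := strconv.ParseFloat(fields[0], 64)
        period, err2 := strconv.ParseFloat(fields[1], 64)
        if err1 != nil || err2 != nil || period <= 0 {
            return
        }
        if procs := int(math.Ceil(quota / period)); procs > 0 {
            runtime.GOMAXPROCS(procs)
        }
    }

    func main() {
        setMaxProcsFromCgroup()
        // ... rest of the program
    }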

xyzzy_plugh•12mo ago
Relevant proposal to make GOMAXPROCS cgroup-aware: https://github.com/golang/go/issues/73193
robinhoodexe•12mo ago
Looks like it was just merged btw.
yencabulator•12mo ago
This should be automatic these days (for the basic scenarios).

https://github.com/golang/go/blob/a1a151496503cafa5e4c672e0e...

jasonthorsness•12mo ago
uh isn't that change 3 hours old?
yencabulator•12mo ago
Oh heh yes it is. I just remembered the original discussion from 2019 (https://github.com/golang/go/issues/33803) and grepped the source tree for cgroup to see if that got done or not, but didn't check when it got done.

As said in 2019, import https://github.com/uber-go/automaxprocs to get the functionality ASAP.
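
For reference, its documented usage is a single blank import, which adjusts GOMAXPROCS at process start:

    package main

    import (
        _ "go.uber.org/automaxprocs" // sets GOMAXPROCS to match the cgroup CPU quota
    )

    func main() {
        // GOMAXPROCS now respects the container's CPU limit.
    }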

jasonthorsness•12mo ago
Super-weird coincidence, but welcome; I have been waiting for this for a long time!
williamdclt•12mo ago
I honestly can't count on my fingers and toes how many times something precisely relevant to me was brought up or sorted out hours to days before I looked it up. And more than once, by people I personally knew!

Always a weird feeling; it's a small world.

formerly_proven•12mo ago
This is probably going to save quadrillions of CPU cycles by making an untold number of deployed Go applications a bit more CPU-efficient. Since Go is the "lingua franca" of containers, many ops people assume the Go runtime is container-aware; it's not (well, not in any released version yet).

If they'd now also make the GC respect memory cgroup limits (i.e. automatic GOMEMLIMIT), we'd probably be freeing up a couple petabytes of memory across the globe.

Java has been doing these things for a while; even OpenJDK 8 has had those patches since probably before Covid.
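
Until then, a sketch of what an automatic GOMEMLIMIT could look like, again assuming cgroup v2 only (the helper name and the 90% headroom factor are illustrative, not from any released runtime):

    package main

    import (
        "os"
        "runtime/debug"
        "strconv"
        "strings"
    )

    // setMemLimitFromCgroup reads the cgroup v2 memory.max file and hands
    // ~90% of it to the GC as a soft limit (Go 1.19+), leaving headroom
    // for stacks and other non-heap memory.
    func setMemLimitFromCgroup() {
        data, err := os.ReadFile("/sys/fs/cgroup/memory.max")
        if err != nil {
            return
        }
        s := strings.TrimSpace(string(data))
        if s == "max" {
            return // no memory limit configured
        }
        limit, err := strconv.ParseInt(s, 10, 64)
        if err != nil || limit <= 0 {
            return
        }
        debug.SetMemoryLimit(limit / 10 * 9)
    }

    func main() {
        setMemLimitFromCgroup()
        // ... rest of the program
    }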

mappu•12mo ago
GOMEMLIMIT is not as easy, you may have other processes in the same container/cgroup also using memory.
kunley•12mo ago
While I admit that respecting the cgroup's setting is a good thing, I'm not sure it's really quadrillions.

Or is it? Needs calculation.

formerly_proven•11mo ago
I would've expected it to be either way too much or way too little, but after doing the math it could be sorta in the right ballpark, at least cosmically speaking.

Let's go with three quadrillion (a quadrillion is apparently 10^15) and assume a server CPU does 3 GHz (3×10^9 cycles per second): that's 10^6 seconds of one core's time, and a day is about 100k seconds, so ~ten core-days. But of course we're only saving cycles. I've seen throughput increase by about 50% when setting GOMAXPROCS on bigger machines, but in most of those cases we're looking at containers with fractional cores. On the other hand, there are many containers. So...

kunley•11mo ago
Nice reasoning, thanks.

Hey, but what did you have in mind with regard to bigger machines? I think we're talking here about lowering GOMAXPROCS to get less context switching of the OS threads. While that can bring some improvement, my gut feeling is that it'd hardly be 50% faster overall; is your scenario the same?

01HNNWZ0MV43FF•12mo ago
Trying to see if Rust and Tokio have the same problem. I don't know enough about cgroups to be sure. Tokio at this line [1] ends up delegating to `std::thread::available_parallelism` [2], which says

> It may overcount the amount of parallelism available when limited by a process-wide affinity mask or cgroup quotas and sched_getaffinity() or cgroup fs can’t be queried, e.g. due to sandboxing.

[1] https://docs.rs/tokio/1.45.0/src/tokio/loom/std/mod.rs.html#...

[2] https://doc.rust-lang.org/stable/std/thread/fn.available_par...

nvarsj•12mo ago
Probably not?

The fundamental issue comes down to background GC and CPU quotas in cgroups.

If your number of worker threads is too high, GC will eat up all the quota.

kortex•12mo ago
Fantastic write-up! The visualizations are great, and it's thorough but readable.
weiwenhao•12mo ago
Your write-up is so detailed that I even feel like I could implement a complete Go scheduler myself.
davidw•11mo ago
I'd be interested in seeing a comparison of this and the BEAM/Erlang/Elixir scheduler by someone paying attention to the details.