
Understanding the Go Scheduler

https://nghiant3223.github.io/2025/04/15/go-scheduler.html
180•gnabgib•10mo ago

Comments

90s_dev•10mo ago
I heard that the scheduler is a huge obstacle to many potential optimizations. Is that true?
NAHWheatCracker•10mo ago
In some ways, yes. If you want to optimize at that level you ought to use another language.

I'm not a low-level optimization guy, but I've had occasions where I wanted control over which threads my goroutines run on, or to prioritize important goroutines. It's a trade-off for keeping things less complex, which is standard for Go.

I suppose there's always hope that the Go developers can change things.

silisili•10mo ago
You can kinda work around this, though: the runtime package has LockOSThread, which pins the calling goroutine to its current OS thread and prevents other goroutines from running on it.

If you model it in a way where you have one goroutine per OS thread that receives and does work, it gets you close. But in many cases that means re-architecting the entire code base, as it's not a style I typically reach for.
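
For illustration, a minimal sketch of that pattern (the channel and worker names are invented here, not from the thread): each worker goroutine pins itself to an OS thread with runtime.LockOSThread and drains a shared work channel.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        work := make(chan int)
        var wg sync.WaitGroup

        // One worker goroutine per P; each pins itself to its OS thread.
        for i := 0; i < runtime.GOMAXPROCS(0); i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                runtime.LockOSThread() // pin this goroutine to its current thread
                defer runtime.UnlockOSThread()
                for item := range work {
                    fmt.Printf("worker %d handling item %d\n", id, item)
                }
            }(i)
        }

        for i := 0; i < 8; i++ {
            work <- i
        }
        close(work)
        wg.Wait()
    }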

naikrovek•10mo ago
That sounds a lot like just using another language.
silisili•10mo ago
It's really not that bad. If you have an existing Go codebase that you can speed up this way, it's fine.

That said, if you're greenfielding and see this as a limitation to begin with, picking another language is probably the right way.

jerf•10mo ago
If you need it here or there, no. I've got a use case where I need a single locked thread for a particular syscall's functionality. It's not like it leaks out into the rest of the program and everything else has to change to accommodate it.

If you need it pervasively, Go may not be the correct choice. Then again, the list of languages that are not a correct choice in that case is quite long. That's a minority case. An important one, but a minority one.

jasonthorsness•10mo ago
It's always a sign of good design when something as complex as the scheduler described here "just works" behind the simple abstraction of the goroutine. What a great article.

"1/61 of the time, check the global run queue." Stuff like this is a little odd; I would have thought this would be a variable dependent on the number of physical cores.

01HNNWZ0MV43FF•10mo ago
That's so funny. I just saw `61` in the Tokio code with a comment "copied this from Go"
__turbobrew__•10mo ago
Make sure you set GOMAXPROCS when the runtime is cgroup-limited.

I once profiled a slow Go program running on a node with 168 cores, where cpu.max was 2 cores for the cgroup. The runtime defaults GOMAXPROCS to the number of visible cores, which was 168 in this case. Over half the runtime was the scheduler bouncing goroutines between 168 Ps despite cpu.max allowing only 2 CPUs.

The JRE is smart enough to figure out that it is running in a resource-limited cgroup and make sane decisions based on that, but Go has no such thing.
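
The manual workaround looks roughly like this. A sketch assuming cgroup v2, where the quota lives in /sys/fs/cgroup/cpu.max (in practice uber-go/automaxprocs, linked below, handles the edge cases):

    package main

    import (
        "fmt"
        "os"
        "runtime"
        "strconv"
        "strings"
    )

    func main() {
        // cpu.max contains "<quota> <period>" in microseconds, or
        // "max <period>" when unlimited. quota/period = usable cores.
        if data, err := os.ReadFile("/sys/fs/cgroup/cpu.max"); err == nil {
            fields := strings.Fields(string(data))
            if len(fields) == 2 && fields[0] != "max" {
                quota, _ := strconv.Atoi(fields[0])
                period, _ := strconv.Atoi(fields[1])
                if cores := quota / period; cores >= 1 {
                    runtime.GOMAXPROCS(cores) // e.g. 2 instead of 168
                }
            }
        }
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }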

xyzzy_plugh•10mo ago
Relevant proposal to make GOMAXPROCS cgroup-aware: https://github.com/golang/go/issues/73193
robinhoodexe•10mo ago
Looks like it was just merged btw.
yencabulator•10mo ago
This should be automatic these days (for the basic scenarios).

https://github.com/golang/go/blob/a1a151496503cafa5e4c672e0e...

jasonthorsness•10mo ago
uh isn't that change 3 hours old?
yencabulator•10mo ago
Oh heh yes it is. I just remembered the original discussion from 2019 (https://github.com/golang/go/issues/33803) and grepped the source tree for cgroup to see if that got done or not, but didn't check when it got done.

As said in 2019, import https://github.com/uber-go/automaxprocs to get the functionality ASAP.
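
For reference, the linked package is used via a blank import; it reads the cgroup quota during package initialization and adjusts GOMAXPROCS accordingly:

    package main

    import (
        "fmt"
        "runtime"

        // Blank import: its init() sets GOMAXPROCS from the CPU quota.
        _ "go.uber.org/automaxprocs"
    )

    func main() {
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }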

jasonthorsness•10mo ago
super-weird coincidence but welcome, I have been waiting for this for a long time!
williamdclt•10mo ago
I honestly can't count on my fingers and toes how many times something precisely relevant to me was brought up or sorted out hours to days before I looked it up. And more than once, by people I personally knew!

Always a weird feeling, it’s a small world

formerly_proven•10mo ago
This is probably going to save quadrillions of CPU cycles by making an untold number of deployed Go applications a bit more CPU efficient. Since Go is the "lingua franca" of containers, many ops people assume the Go runtime is container-aware - it's not (well not in any released version, yet).

If they'd now also make the GC respect memory cgroup limits (i.e. automatic GOMEMLIMIT), we'd probably be freeing up a couple petabytes of memory across the globe.

Java has been doing these things for a while; even OpenJDK 8 has had those patches since probably before COVID.
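
A naive sketch of what an automatic GOMEMLIMIT could look like, assuming cgroup v2 and a single Go process in the cgroup (the path and the 10% headroom factor are assumptions, not from the thread):

    package main

    import (
        "fmt"
        "os"
        "runtime/debug"
        "strconv"
        "strings"
    )

    func main() {
        // memory.max contains a byte count, or "max" when unlimited.
        data, err := os.ReadFile("/sys/fs/cgroup/memory.max")
        if err != nil {
            return
        }
        s := strings.TrimSpace(string(data))
        if s == "max" {
            return
        }
        if limit, err := strconv.ParseInt(s, 10, 64); err == nil {
            // Leave ~10% headroom below the cgroup limit so the GC kicks
            // in before the kernel OOM killer does.
            debug.SetMemoryLimit(limit - limit/10)
            fmt.Println("GOMEMLIMIT set to", limit-limit/10)
        }
    }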

mappu•10mo ago
GOMEMLIMIT is not as easy, you may have other processes in the same container/cgroup also using memory.
kunley•10mo ago
While I admit respecting the cgroup's setting is a good thing, I'm not sure it's really quadrillions.

Or is it? Needs calculation.

formerly_proven•10mo ago
I would've expected it to be either way too much or way too little, but after doing the math it could be sorta in the right ballpark, at least cosmically speaking.

Let's go with three quadrillion (a quadrillion is 10^15); assume a server CPU does 3 GHz (3×10^9 cycles per second). That works out to 10^6 seconds, and a day is about 100k seconds, so ~ten days of one core. But of course we're only saving cycles. I've seen throughput increase by about 50% when setting GOMAXPROCS on bigger machines, but in most of those cases we're looking at containers with fractional cores. On the other hand, there are many containers. So...

kunley•10mo ago
Nice reasoning, thanks.

Hey, but what did you have in mind with regard to bigger machines? I think we're talking here about lowering GOMAXPROCS to get, in effect, less context switching of OS threads. While that can bring some improvement, my gut feeling is that it'd hardly be 50% faster overall. Is your scenario the same?

01HNNWZ0MV43FF•10mo ago
Trying to see if Rust and Tokio have the same problem. I don't know enough about cgroups to be sure. Tokio at this line [1] ends up delegating to `std::thread::available_parallelism` [2] which says

> It may overcount the amount of parallelism available when limited by a process-wide affinity mask or cgroup quotas and sched_getaffinity() or cgroup fs can’t be queried, e.g. due to sandboxing.

[1] https://docs.rs/tokio/1.45.0/src/tokio/loom/std/mod.rs.html#...

[2] https://doc.rust-lang.org/stable/std/thread/fn.available_par...

nvarsj•10mo ago
Probably not?

The fundamental issue comes down to background GC and CPU quotas in cgroups.

If your number of worker threads is too high, GC will eat up all the quota.

kortex•10mo ago
Fantastic write-up! The visualizations are great, and it's thorough but readable.
weiwenhao•10mo ago
Your write-up is so detailed that I feel like I could implement a complete Go scheduler myself.
davidw•10mo ago
I'd be interested in seeing a comparison of this and the BEAM/Erlang/Elixir scheduler by someone paying attention to the details.