frontpage.

Adobe Photoshop 1.0 Source Code (1990)

https://computerhistory.org/blog/adobe-photoshop-source-code/
185•tosh•4d ago•47 comments

Instant database clones with PostgreSQL 18

https://boringsql.com/posts/instant-database-clones/
115•radimm•5h ago•18 comments

Test, Don't (Just) Verify

https://alperenkeles.com/posts/test-dont-verify/
6•alpaylan•33m ago•0 comments

Font with Built-In Syntax Highlighting (2024)

https://blog.glyphdrawing.club/font-with-built-in-syntax-highlighting/
34•california-og•3h ago•6 comments

Carnap – A formal logic framework for Haskell

https://carnap.io/
48•ravenical•4h ago•9 comments

Show HN: CineCLI – Browse and torrent movies directly from your terminal

https://github.com/eyeblech/cinecli
186•samsep10l•8h ago•72 comments

Snitch – A friendlier ss/netstat

https://github.com/karol-broda/snitch
217•karol-broda•12h ago•59 comments

Executorch: On-device AI across mobile, embedded and edge for PyTorch

https://github.com/pytorch/executorch
8•klaussilveira•4d ago•0 comments

It's Always TCP_NODELAY

https://brooker.co.za/blog/2024/05/09/nagle.html
357•eieio•16h ago•120 comments

The Illustrated Transformer

https://jalammar.github.io/illustrated-transformer/
410•auraham•18h ago•76 comments

10 years bootstrapped: €6.5M revenue with a team of 13

https://www.datocms.com/blog/a-look-back-at-2025
72•steffoz•5h ago•18 comments

Ultrasound Cancer Treatment: Sound Waves Fight Tumors

https://spectrum.ieee.org/ultrasound-cancer-treatment
283•rbanffy•17h ago•84 comments

Ask HN: What are the best engineering blogs with real-world depth?

95•nishilpatel•3h ago•49 comments

Claude Code gets native LSP support

https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md
453•JamesSwift•21h ago•255 comments

GLM-4.7: Advancing the Coding Capability

https://z.ai/blog/glm-4.7
365•pretext•18h ago•189 comments

The Polyglot NixOS

https://x86.lol/generic/2025/12/19/polyglot.html
83•todsacerdoti•3d ago•23 comments

NIST was 5 μs off UTC after last week's power cut

https://www.jeffgeerling.com/blog/2025/nist-was-5-μs-utc-after-last-weeks-power-cut
289•jtokoph•20h ago•127 comments

Our New Sam Audio Model Transforms Audio Editing

https://about.fb.com/news/2025/12/our-new-sam-audio-model-transforms-audio-editing/
120•ushakov•6d ago•45 comments

The Duodecimal Bulletin, Vol. 55, No. 1, Year 1209 [pdf]

https://dozenal.org/drupal/sites_bck/default/files/DuodecimalBulletinIssue551.pdf
47•susam•11h ago•12 comments

Debian adds LoongArch as officially supported architecture

https://lists.debian.org/debian-devel-announce/2025/12/msg00004.html
90•cbmuser•3d ago•22 comments

The Garbage Collection Handbook

https://gchandbook.org/index.html
227•andsoitis•18h ago•27 comments

Flock Exposed Its AI-Powered Cameras to the Internet. We Tracked Ourselves

https://www.404media.co/flock-exposed-its-ai-powered-cameras-to-the-internet-we-tracked-ourselves/
643•chaps•20h ago•412 comments

Cecot – 60 Minutes

https://archive.org/details/insidececot
649•lawlessone•12h ago•69 comments

Scaling LLMs to Larger Codebases

https://blog.kierangill.xyz/oversight-and-guidance
269•kierangill•21h ago•100 comments

Remove Black Color with Shaders

https://yuanchuan.dev/remove-black-color-with-shaders
39•surprisetalk•4d ago•12 comments

FCC Updates Covered List to Include Foreign UAS and UAS Critical Components [pdf]

https://docs.fcc.gov/public/attachments/DOC-416839A1.pdf
83•Espressosaurus•9h ago•65 comments

Show HN: Python SDK – forecasting with foundation time-series and tabular models

https://github.com/S-FM/faim-python-client
28•ChernovAndrei•5d ago•8 comments

FPGAs Need a New Future

https://www.allaboutcircuits.com/industry-articles/fpgas-need-a-new-future/
192•thawawaycold•4d ago•127 comments

Solving the Problems of HBM-on-Logic

https://morethanmoore.substack.com/p/solving-the-problems-of-hbm-on-logic
5•zdw•5d ago•0 comments

A centennial look back at Edward Gorey's macabre art and guarded life

https://www.washingtonpost.com/books/2025/12/13/edward-gorey-centennial-gregory-hischak-review/
23•prismatic•6d ago•2 comments

Understanding the Go Scheduler

https://nghiant3223.github.io/2025/04/15/go-scheduler.html
180•gnabgib•7mo ago

Comments

90s_dev•7mo ago
I heard that the scheduler is a huge obstacle to many potential optimizations. Is that true?
NAHWheatCracker•7mo ago
In some ways, yes. If you want to optimize at that level you ought to use another language.

I'm not a low-level optimization guy, but I've had occasions where I wanted control over which threads my goroutines run on, or wanted to prioritize important goroutines. It's a trade-off for keeping things less complex, which is standard for Go.

I suppose there's always hope that the Go developers can change things.

silisili•7mo ago
You can kinda work around this, though. The runtime package has LockOSThread, which pins a goroutine to its current OS thread and prevents other goroutines from running on it.

If you model it so that you have one goroutine per OS thread that receives and does work, it gets you close. But in many cases that means rearchitecting the entire codebase, as it's not a style I typically reach for.
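
A minimal sketch of that one-goroutine-per-OS-thread pattern (the helper name and channel shape are just illustrative, not from the article):

    package main

    import (
        "fmt"
        "runtime"
    )

    // pinnedWorker runs all received work on one dedicated OS thread.
    // LockOSThread ties this goroutine to its current thread until the
    // goroutine exits (or UnlockOSThread runs), so the scheduler won't
    // run other goroutines on that thread in the meantime.
    func pinnedWorker(work <-chan func(), done chan<- struct{}) {
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()
        for fn := range work {
            fn()
        }
        close(done)
    }

    func main() {
        work := make(chan func())
        done := make(chan struct{})
        go pinnedWorker(work, done)

        // Anything that needs thread affinity (thread-local state, some
        // syscalls, GUI contexts) is sent to the pinned worker.
        work <- func() { fmt.Println("running on a dedicated OS thread") }
        close(work)
        <-done
    }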

naikrovek•7mo ago
That sounds a lot like just using another language.
silisili•7mo ago
It's really not that bad. If you have a Go codebase that you can speed up this way, it's fine.

That said, if you're greenfielding and see this as a limitation to begin with, picking another language is probably the right way.

jerf•7mo ago
If you need it here or there, no. I've got a use case where I need a single locked thread for a particular syscall's functionality. It's not like it leaks out into the rest of the program and everything else has to change to accommodate it.

If you need it pervasively, Go may not be the correct choice. Then again, the list of languages that is not a correct choice in that case is quite long. That's a minority case. An important one, but a minority one.

jasonthorsness•7mo ago
It's always a sign of good design when something as complex as the scheduler described here "just works" behind the simple abstraction of the goroutine. What a great article.

"1/61 of the time, check the global run queue." Stuff like this is a little odd; I would have thought this would be a variable dependent on the number of physical cores.

01HNNWZ0MV43FF•7mo ago
That's so funny. I just saw `61` in the Tokio code with a comment "copied this from Go"
__turbobrew__•7mo ago
Make sure you set GOMAXPROCS when the runtime is cgroup-limited.

I once profiled a slow Go program running on a node with 168 cores, while cpu.max for the cgroup was 2 cores. The runtime defaults GOMAXPROCS to the number of visible cores, which was 168 in this case. Over half the runtime was the scheduler bouncing goroutines between 168 OS threads despite cpu.max being 2 CPUs.

The JRE is smart enough to figure out that it is running in a resource-limited cgroup and to make sane decisions based on that, but Go has no such thing.
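
On runtimes without that awareness, the workaround is to cap GOMAXPROCS yourself, either through the GOMAXPROCS environment variable or at startup. A minimal sketch, where CPU_QUOTA is a hypothetical env var the deployment sets to the cgroup quota:

    package main

    import (
        "fmt"
        "os"
        "runtime"
        "strconv"
    )

    // Clamp GOMAXPROCS to the container's CPU quota so the scheduler
    // doesn't spread goroutines across 168 Ps when only 2 CPUs' worth
    // of bandwidth is available.
    func init() {
        if v := os.Getenv("CPU_QUOTA"); v != "" {
            if n, err := strconv.Atoi(v); err == nil && n > 0 {
                runtime.GOMAXPROCS(n)
            }
        }
    }

    func main() {
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }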

xyzzy_plugh•7mo ago
Relevant proposal to make GOMAXPROCS cgroup-aware: https://github.com/golang/go/issues/73193
robinhoodexe•7mo ago
Looks like it was just merged btw.
yencabulator•7mo ago
This should be automatic these days (for the basic scenarios).

https://github.com/golang/go/blob/a1a151496503cafa5e4c672e0e...

jasonthorsness•7mo ago
uh isn't that change 3 hours old?
yencabulator•7mo ago
Oh heh yes it is. I just remembered the original discussion from 2019 (https://github.com/golang/go/issues/33803) and grepped the source tree for cgroup to see if that got done or not, but didn't check when it got done.

As said in 2019, import https://github.com/uber-go/automaxprocs to get the functionality ASAP.
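
Wiring it in is just a blank import; the package's init() reads the cgroup CPU quota and adjusts GOMAXPROCS to match:

    package main

    import (
        "fmt"
        "runtime"

        // Imported for its side effect only.
        _ "go.uber.org/automaxprocs"
    )

    func main() {
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }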

jasonthorsness•7mo ago
super-weird coincidence but welcome, I have been waiting for this for a long time!
williamdclt•7mo ago
I honestly can’t count on my fingers and toes how many times something very precisely relevant to me was brought up or sorted out hours-to-days before I looked it up. And more often than once, by people I personally knew!

Always a weird feeling, it’s a small world

formerly_proven•7mo ago
This is probably going to save quadrillions of CPU cycles by making an untold number of deployed Go applications a bit more CPU efficient. Since Go is the "lingua franca" of containers, many ops people assume the Go runtime is container-aware - it's not (well not in any released version, yet).

If they'd now also make the GC respect memory cgroup limits (i.e. automatic GOMEMLIMIT), we'd probably be freeing up a couple petabytes of memory across the globe.

Java has been doing these things for a while; even OpenJDK 8 has had those patches since probably before COVID.
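
The knob itself has existed since Go 1.19; it just isn't derived from the cgroup automatically. A sketch of setting it by hand, where MEM_LIMIT_BYTES is a hypothetical env var (the GOMEMLIMIT env var works the same way with no code at all):

    package main

    import (
        "os"
        "runtime/debug"
        "strconv"
    )

    // Set the GC's soft memory limit from a value the deployment already
    // knows. Leaving ~10% headroom below the hard cgroup limit gives the
    // GC a chance to react before the OOM killer does.
    func init() {
        if v := os.Getenv("MEM_LIMIT_BYTES"); v != "" {
            if n, err := strconv.ParseInt(v, 10, 64); err == nil && n > 0 {
                debug.SetMemoryLimit(n - n/10)
            }
        }
    }

    func main() {}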

mappu•7mo ago
GOMEMLIMIT is not as easy; you may have other processes in the same container/cgroup also using memory.
kunley•7mo ago
While I admit that respecting the cgroup's setting is a good thing, I am not sure it's really quadrillions.

Or is it? Needs calculations

formerly_proven•7mo ago
I would've expected it to be either way too much or way too little, but after doing the math it could be sorta in the right ballpark, at least cosmically speaking.

Let's go with three quadrillion (a quadrillion is apparently 10^15), and let's assume a server CPU does 3 GHz (3x10^9 cycles per second); that works out to 10^6 CPU-seconds. A day is about 100k seconds, so ~ten days of one core. But of course we're only saving cycles. I've seen throughput increase by about 50% when setting GOMAXPROCS on bigger machines, but in most of those cases we're looking at containers with fractional cores. On the other hand, there are many containers. So...

kunley•7mo ago
Nice reasoning, thanks.

Hey, but what did you have in mind with regard to bigger machines? I think we're talking here about lowering GOMAXPROCS to get less context switching of the OS threads. While it can bring some good results, my gut feeling is that it'd hardly be 50% faster overall; is your scenario the same?

01HNNWZ0MV43FF•7mo ago
Trying to see if Rust and Tokio have the same problem. I don't know enough about cgroups to be sure. Tokio at this line [1] ends up delegating to `std::thread::available_parallelism` [2] which says

> It may overcount the amount of parallelism available when limited by a process-wide affinity mask or cgroup quotas and sched_getaffinity() or cgroup fs can’t be queried, e.g. due to sandboxing.

[1] https://docs.rs/tokio/1.45.0/src/tokio/loom/std/mod.rs.html#...

[2] https://doc.rust-lang.org/stable/std/thread/fn.available_par...

nvarsj•7mo ago
Probably not?

The fundamental issue comes down to background GC and CPU quotas in cgroups.

If your number of worker threads is too high, GC will eat up all the quota.

kortex•7mo ago
Fantastic writeup! The visualizations are great, and it's thorough but still readable.
weiwenhao•7mo ago
Your write-up is so detailed that I feel like I could implement a complete Go scheduler myself.
davidw•7mo ago
I'd be interested in seeing a comparison of this and the BEAM/Erlang/Elixir scheduler by someone paying attention to the details.