
Building Burstables: CPU slicing with cgroups

https://www.ubicloud.com/blog/building-burstables-cpu-slicing-with-cgroups
130•msarnowicz•11mo ago

Comments

msarnowicz•11mo ago
Hey, author here. Please AMA.

I came into the Linux world via Postgres, and this was an interesting project for me learning more about Linux internals. While cgroups v2 do offer basic support for CPU bursting, the bursts are short-lived, and credits don’t persist beyond sub-second intervals. If you’ve run into scenarios where more adaptive or sustained bursting would help, we’d love to hear about them. Knowing your use cases will help shape what we build next.
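The cgroup v2 knobs being discussed are just two files per group. A minimal sketch of the interface (the cgroup path and numbers are illustrative, and writing the files requires root on a host with cgroup v2 mounted):

```python
# Sketch of the cgroup v2 CPU bursting interface described above.
# CGROUP is a hypothetical group path; values are illustrative.

CGROUP = "/sys/fs/cgroup/burstable-vm"

def cpu_max_line(quota_us: int, period_us: int = 100_000) -> str:
    """Format a 'cpu.max' value: quota and period, in microseconds."""
    return f"{quota_us} {period_us}"

def apply_burst(quota_us: int, burst_us: int, period_us: int = 100_000) -> None:
    """Program cpu.max and cpu.max.burst for the group (needs root)."""
    # The kernel rejects a burst larger than the cpu.max quota.
    if burst_us > quota_us:
        raise ValueError("cpu.max.burst cannot exceed the cpu.max quota")
    with open(f"{CGROUP}/cpu.max", "w") as f:
        f.write(cpu_max_line(quota_us, period_us))
    with open(f"{CGROUP}/cpu.max.burst", "w") as f:
        f.write(str(burst_us))

# e.g. apply_burst(50_000, 50_000) caps the group at half a CPU per
# 100 ms period, with up to 50 ms of accumulated burst credit --
# credit that, as noted above, does not persist beyond sub-second scales.
```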

parrit•11mo ago
Thanks! That was a pleasant read. I have been wanting to mess with cgroups for a while, in order to hack together a "docker" like many have done before to understand it better. This will help!

Are there typical use cases where you reach for cgroups directly instead of using the container abstraction?

msarnowicz•11mo ago
Thanks for the kind words. Even if you are not building a cloud service, I think it is good to understand how the underlying layer works and what the knobs and limits of the platform are. I could see a use case where two or more processes need to run on one VM or container, maybe for cost-saving reasons or for specific architecture/security reasons, but need to be guaranteed a certain amount of resources and a certain isolation from each other.
motrm•11mo ago
Echoing parrit's comment, this was indeed a very nice read and very well written.

I particularly enjoyed the gentle exposition into the world of cgroups and how they work, the levers available, and finally how Ubicloud uses them.

Looking forward to reading how you handle burst credits over longer periods, once you implement that :)

Lovely work, Maciek!

msarnowicz•11mo ago
Thank you very much, I appreciate your comment.
nighthawk454•11mo ago
Great article, thanks! I’ve been curious if there’s any scheduling optimizations for workloads that are extremely burst-y. Such as super low traffic websites or cron job type work - where you may want your database ‘provisioned’ all the time, but realistically it won’t get anywhere near even the 50% cpu minimum at any kind of sustained rate. Presumably those could be hosted at even a fraction of the burst cost. Is that a use case Ubicloud has considered?
msarnowicz•11mo ago
This is a very valid scenario, but one that is not yet fully baked into this implementation. As mentioned, though, this is a starting point: we want to hear feedback and see customers' workloads on Burstables first.

The main challenge here is that cpu.max.burst can be set no higher than the limit set in cpu.max, which limits our options to some extent. But we can still look at some possible implementation choices here:

- Pack more VMs into the same slice/group, and with that lower the minimum guaranteed CPU and, at the same time, the price point. This would increase the chance of running into a "noisy neighbor", but we expect it would not be used for any critical workload.
- Implement calculation of CPU credits outside of the kernel and change the CPU max and burst limits dynamically over an extended period of time (hours and days, instead of sub-second).
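The second option, tracking credits outside the kernel, could be sketched roughly like this (the credit model and every number here are invented for illustration; this is not Ubicloud's implementation):

```python
# Illustrative userspace credit accounting over long windows: unused
# baseline CPU accrues as credits, and while credits remain the VM's
# quota can be raised to its burst ceiling. All parameters hypothetical.

from dataclasses import dataclass

@dataclass
class CreditAccount:
    baseline_pct: float   # guaranteed CPU share, e.g. 50.0
    burst_pct: float      # ceiling while credits last, e.g. 100.0
    max_credits: float    # cap on banked credit-seconds
    credits: float = 0.0

    def tick(self, used_pct: float, dt_s: float) -> float:
        """Account one interval; return the quota (%) to program next."""
        # Bank unused baseline; spend credits for usage above baseline.
        self.credits += (self.baseline_pct - used_pct) / 100.0 * dt_s
        self.credits = max(0.0, min(self.credits, self.max_credits))
        return self.burst_pct if self.credits > 0 else self.baseline_pct

acct = CreditAccount(baseline_pct=50, burst_pct=100, max_credits=3600)
quota = acct.tick(used_pct=10, dt_s=60)  # an idle minute banks credits
```

A control loop would call `tick` periodically and rewrite `cpu.max` accordingly, which is how sub-second kernel bursting could be stretched to hours or days.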

nighthawk454•11mo ago
Gotcha, thanks for the reply. Makes sense to target burstables first - that seems to be the most common feature set. That’s interesting that it’s not readily available in the kernel. I once spoke to some AWS folks dealing with Batch/ECS scheduling of docker container tasks and they hit similar limitations. As a result their CPU max/burst settings work like the underlying cgroups too.

I imagine writing a custom scheduler would be quite an undertaking!

msarnowicz•11mo ago
I think so, too!
phrotoma•11mo ago
I don't have a question but I really wanted to say thanks for the blog post. Extremely clear and cogent writing on a tricky topic. Well done!
jauntywundrkind•11mo ago
I'd also strongly recommend this view of how Kubernetes uses cgroups, showing similar drill downs for how everything gets managed. Lovely view of what's really happening! https://martinheinz.dev/blog/91

I've been a bit apoplectic in the past that cgroups seemed not super helpful in Kubernetes, but this really showed me how the different Kubernetes QoS levels are driven by similar juggling of different cgroups.

I'm not sure if this makes use of cpu.max.burst or not. There's a fun article that monkeys with these cgroups directly, which is neat to see. It also links to an ask that Kubernetes get support for the new (5.14) CFS Burst system. Which is a whole nother fun rabbit hole of fair share bursting to go down! https://medium.com/@christian.cadieux/kubernetes-throttling-... https://github.com/kubernetes/kubernetes/issues/104516

msarnowicz•11mo ago
Thank you, that is a good perspective, too!
__turbobrew__•11mo ago
cpu.max.burst increases the chances of noisy neighbours stealing CPU from other tenants.

I run multi-tenant k8s clusters with hundreds of tenants and it fundamentally is a hard problem to balance workload performance with efficiency. Sharing resources increases efficiency but in most cases increases tail latencies.

jeffbee•11mo ago
If you use the k8s QoS level "guaranteed", CPU resources will be distinct — via cpusets — from the ones used by the riff-raff. This is a good way to segregate latency-sensitive apps where you care about latency from throughput-oriented stuff where you don't.
__turbobrew__•11mo ago
Guaranteed QoS isn’t perfect:

1. Neighbours can be noisy to the other hyperthread on the same CPU. For example, heavy usage of avx-512 and other vectorized instructions can affect a tenant running on the same core but different hyperthread. You can disable hyperthreading, but now you are making the same tradeoff where you are sacrificing efficiency for low tail latencies.

2. There are certain locks in the kernel which can be exhausted by certain behaviour of a single tenant. For example, on kernel 5.15 there was one global kernel lock for cgroup resource accounting. If you have a tenant which is constantly hitting cgroup limits it increases lock contention in the kernel which slows down other tenants on the system which also use the same locks. This particular issue with cgroups accounting has been improved in later kernels.

3. If your latency-sensitive service runs on the same cores which service IRQs, tail latency can greatly increase when there is heavy IRQ load, for example from high-speed NIC IRQs. You can isolate those CPUs from the pool of CPUs offered to pods, but then you are dedicating 4-8 CPUs to just processing interrupts. Ideally you could run the non-guaranteed pods on the CPUs which service IRQs, but that is not supported by Kubernetes.

4. During full node memory pressure, the kernel does not respect memory.min and will reclaim pages of guaranteed QoS workloads.

5. The current implementation of memory QoS does not adjust memory.max of the burstable pod slice, so burstable pods can take up the entire free memory of the kubepods slice, which starves new memory allocations from guaranteed pods.

Don't even get me started on NUMA issues.

jeffbee•11mo ago
There isn't any way on Linux to deal with processes that create dirty pages. It is folly to try. The only way to deal is to put I/O stuff on a whole box/node by itself, and outlaw block I/O on all other nodes.
hinkley•11mo ago
I suspect you can only really count on neighbors to take care of their own. Anything else they see will be taken as an entitlement.

So for instance if you run three processes for the same customer, can you set them to use the same cpu slices and deal with one of their apps occasionally needing a burst of CPU?

__turbobrew__•11mo ago
Sure in theory you could do that, but kubernetes does not support overriding the top level cgroup a pod is assigned to.
immibis•11mo ago
Can't find the article where I first read it (something like "Queuing theory for software engineers") but average latency increases as, IIRC, service time ÷ (1 - utilization). Get half as close to 100% utilization, and you double your average latency. A system at 87.5% utilization has double the latency of one at 75%. At 100% it's infinity (averaged over infinite time - on shorter timescales it's an unpredictable scale-free random walk).

This is fundamental - the closer utilization is to 100%, the higher the chance a newly arriving work item has to wait for one that's already running, and several already in the queue. What's astonishing is how steep that curve is. At 95% utilization the average queue length is about 20 tasks. At 99% it's 100 tasks. At 99.9% it's 1000 tasks. If you find yourself at 98% utilization, you should not think "nice - I'm fully utilizing the server I paid for" - you should buy another server and lower it to 49%. (Or optimize the code more.)
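For reference, these figures are what the M/M/1 queueing model predicts: mean residence time W = S / (1 - ρ) and mean number in the system L = ρ / (1 - ρ), where S is the service time and ρ the utilization. A quick sketch checking them against the numbers quoted above:

```python
# M/M/1 sanity check of the latency and queue-length claims above.

def mean_latency(service_time: float, utilization: float) -> float:
    """Mean residence time W = S / (1 - rho)."""
    assert 0 <= utilization < 1
    return service_time / (1 - utilization)

def mean_in_system(utilization: float) -> float:
    """Mean number in the system L = rho / (1 - rho)."""
    assert 0 <= utilization < 1
    return utilization / (1 - utilization)

# Halving the distance to 100% utilization doubles latency:
assert mean_latency(1.0, 0.875) == 2 * mean_latency(1.0, 0.75)

# Queue growth near saturation (roughly 19, 99, 999 tasks,
# matching the rounded figures quoted in the comment):
for u in (0.95, 0.99, 0.999):
    print(f"{u:.1%} utilization -> {mean_in_system(u):.0f} in system")
```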

One way to deal with this is to have separate low-latency and high-latency queues. You can then run low-latency tasks at, say, 50% utilization and fill up idle time with high-latency tasks. Presuming you actually want the HL tasks to ever get done, you can't guarantee 100% utilization, but you can get arbitrarily close as long as there's high-latency work to do. I have no idea whether this is something Kubernetes can do. You can of course have more than two priority levels.

This applies everywhere there's a queue, which is basically everywhere there's a contended resource. Hyperscalers know this. It's even been theorized that S3 Glacier is just the super low priority disk access queue on regular AWS servers (but Amazon won't tell us).

remram•11mo ago
Maybe one of these? https://dzone.com/articles/queuing-theory-for-software-engin... https://medium.com/@quebostina/stack-and-queue-are-two-of-th...
msarnowicz•11mo ago
Reading through the description of how cgroups are used in Kubernetes, I can see some similarities and some differences as well. It is interesting to compare the approaches.

We chose not to use cpu.weight, and instead divide the host explicitly using cgroups (slices in systemd). We put Standard VMs in dedicated slices to keep them isolated and let several Burstable VMs share a slice. This provides a trade-off between the price of the VM and resource guarantees.

We use cpu.max.burst to allow the VMs to "expand" a bit, while we understand that this creates a "noisy neighbor" problem. At the same time there is a minimum guarantee of the CPU. The cgroups allow for all those knobs and give a lot of control. Combining them in various ways is an interesting puzzle.
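The slicing arithmetic described here can be sketched as follows (the helper and all numbers are illustrative, not Ubicloud's actual values):

```python
# Illustrative arithmetic for per-slice CPU quotas: a slice granting
# `share_pct` percent of `vcpus` CPUs, expressed as a cpu.max value.

def slice_cpu_max(vcpus: int, share_pct: int, period_us: int = 100_000) -> str:
    """cpu.max value for a slice guaranteed share_pct% of vcpus CPUs."""
    quota_us = vcpus * period_us * share_pct // 100
    return f"{quota_us} {period_us}"

# A 2-vCPU Burstable slice guaranteed 50% of each CPU:
assert slice_cpu_max(2, 50) == "100000 100000"
# A Standard VM's dedicated slice gets the full quota:
assert slice_cpu_max(2, 100) == "200000 100000"
```

With cpu.max set this way per slice, cpu.max.burst then allows the "expand a bit" behavior described above, up to (but not beyond) the slice's quota.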

solarkraft•11mo ago
My main takeaway from this is that you can control KVM VMs with cgroups just like normal processes. I didn’t expect that.
msarnowicz•11mo ago
I am glad you found this useful!