
Maybe the default settings are too high

https://www.raptitude.com/2025/12/maybe-the-default-settings-are-too-high/
591•htk•10h ago•180 comments

Building an AI agent inside a 7-year-old Rails monolith

https://catalinionescu.dev/ai-agent/building-ai-agent-part-1/
32•cionescu1•2h ago•5 comments

Geometric Algorithms for Translucency Sorting in Minecraft [pdf]

https://douira.dev/assets/document/douira-master-thesis.pdf
3•HeliumHydride•8m ago•10 comments

TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

https://github.com/thu-ml/TurboDiffusion
71•meander_water•6h ago•5 comments

MiniMax M2.1: Built for Real-World Complex Tasks, Multi-Language Programming

https://www.minimaxi.com/news/minimax-m21
135•110•8h ago•42 comments

Show HN: GeneGuessr – a daily biology web puzzle

https://geneguessr.brinedew.bio/
32•brinedew•3d ago•7 comments

Show HN: Gaming Couch – a local multiplayer party game platform for 8 players

https://gamingcouch.com
190•ChaosOp•4d ago•43 comments

Ultimate-Linux: Userspace for Linux in Pure JavaScript

https://github.com/popovicu/ultimate-linux
62•radeeyate•7h ago•14 comments

Fahrplan – 39C3

https://fahrplan.events.ccc.de/congress/2025/fahrplan/
239•rurban•15h ago•50 comments

Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster

https://fidget-spinner.github.io/posts/no-longer-sorry.html
365•lumpa•20h ago•122 comments

Tiled Art

https://tiled.art/en/home/?id=SilverAndGold
131•meander_water•6d ago•5 comments

How to Reproduce This Book with LaTeX

https://github.com/BenjaminGor/Latex_Notes_Tutorial
10•nill0•6d ago•1 comment

Animating Quines for Larva Labs

https://destroytoday.com/blog/animating-quines-for-larva-labs
12•speckx•3d ago•0 comments

The entire New Yorker archive is now digitized

https://www.newyorker.com/news/press-room/the-entire-new-yorker-archive-is-now-fully-digitized
413•thm•5d ago•54 comments

Tachyon: High frequency statistical sampling profiler

https://docs.python.org/3.15/library/profiling.sampling.html
53•vismit2000•3d ago•1 comment

Lessons from a year of Postgres CDC in production

https://clickhouse.com/blog/postgres-cdc-year-in-review-2025
26•saisrirampur•6d ago•0 comments

CUDA Tile Open Sourced

https://github.com/NVIDIA/cuda-tile
176•JonChesterfield•6d ago•84 comments

Seven Diabetes Patients Die Due to Undisclosed Bug in Abbott's Glucose Monitors

https://sfconservancy.org/blog/2025/dec/23/seven-abbott-freestyle-libre-cgm-patients-dead/
265•pabs3•9h ago•86 comments

Paperbacks and TikTok

https://calnewport.com/on-paperbacks-and-tiktok/
116•zdw•3d ago•70 comments

When a driver challenges the kernel's assumptions

http://miod.online.fr/software/openbsd/stories/udl.html
50•todsacerdoti•9h ago•13 comments

Ask HN: What skills do you want to develop or improve in 2026?

103•meridion•17h ago•146 comments

Archiving Git branches as tags

https://etc.octavore.com/2025/12/archiving-git-branches-as-tags/
110•octavore•3d ago•35 comments

Asahi Linux with Sway on the MacBook Air M2 (2024)

https://daniel.lawrence.lu/blog/2024-12-01-asahi-linux-with-sway-on-the-macbook-air-m2/
234•andsoitis•19h ago•229 comments

Show HN: Coderive – Iterating through 1 Quintillion Inside a Loop in just 50ms

https://github.com/DanexCodr/Coderive
6•DanexCodr•4d ago•4 comments

The Program 2025 annual review: How much money does an audio drama podcast make?

https://programaudioseries.com/the-program-results-7/
70•I-M-S•3d ago•16 comments

I sell onions on the Internet (2019)

https://www.deepsouthventures.com/i-sell-onions-on-the-internet/
442•sogen•17h ago•127 comments

Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat (2024)

https://evan.widloski.com/posts/spinners/
81•Evidlo•1d ago•14 comments

We invited a man into our home at Christmas and he stayed with us for 45 years

https://www.bbc.co.uk/news/articles/cdxwllqz1l0o
1052•rajeshrajappan•23h ago•248 comments

Fabrice Bellard Releases MicroQuickJS

https://github.com/bellard/mquickjs/blob/main/README.md
1453•Aissen•2d ago•546 comments

Google is 'gradually rolling out' option to change your gmail.com address

https://9to5google.com/2025/12/24/google-change-gmail-addresses/
201•geox•12h ago•178 comments

The Scalar Select Anti-Pattern

https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html
47•goranmoomin•7mo ago

Comments

castratikron•7mo ago
As long as processing one event does not affect any of the other events in the batch. E.g., the events are file I/O events, and processing one event causes another event's descriptor to get closed before that event can be processed.
wahern•7mo ago
If the close routine on an event source, or the low-level (e.g. epoll) registration, deregistration, and dequeueing logic, doesn't know how to keep polling and liveness state consistent between userspace and the kernel, they've got much bigger problems. This looks like Rust code, so I would hope the event stream libraries are, e.g., keeping Rc'd file objects and properly managing reference integrity vis-à-vis kernel state before the application caller ever sees the first dequeued event in a cycle. This is a perennial issue with event loop libraries and buggy application code (in every language). One can't just deal with raw file descriptors, call the close syscall directly, etc., and hope to keep state consistent implicitly. There's an unavoidable tie-in needed between the application's wrappers around low-level resources and the event loop in use.
taeric•7mo ago
I'm not entirely clear on what the proposal at the end is. It seems the long-term answer to "which of these implications to pursue" is "all of them"? Simply taking in a batch of instructions doesn't immediately change much: you still have to be able to do each of the other things, and you will still have dependencies between batches that can interact in the same ways.

In a sense, this is no different than how your processor is dealing with instructions coming in. You will have some instructions that can be run without waiting on previous ones. You will have some that can complete quickly. You will have some that are stalled on other parts of the system. (I'm sure I could keep wording an instruction to match each of the implications.)

To that end, part of your program has to deal with taking off "what's next" and finding how to prepare it to pass to the execution portion of your program. You can make that part only take in batches, but you are almost certainly more responsible for how you chunk them than whatever process is sending the instructions to you. Even if you are handed clear batches, it is incumbent on you to batch them as they go off to the rest of the system.

lmz•7mo ago
I guess the proposal is "instead of fetching and acting on one event at a time, consider fetching all available events and look for opportunities to optimize which ones you process (e.g. by prioritization or by skipping certain events if superseded by newer ones)".
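That summary can be sketched in plain std Rust, using std::sync::mpsc in place of an async runtime (the event type and the coalescing rule here are invented for illustration, not from the article): drain everything already queued, then drop pointer moves that a newer one supersedes.

```rust
use std::sync::mpsc;

#[derive(Debug, PartialEq)]
enum Event {
    PointerMove { x: i32, y: i32 },
    KeyPress(char),
}

/// Fetch every event that is already queued, instead of handling one at a time.
fn drain_batch(rx: &mpsc::Receiver<Event>) -> Vec<Event> {
    let mut batch = Vec::new();
    while let Ok(ev) = rx.try_recv() {
        batch.push(ev);
    }
    batch
}

/// Within a batch, older pointer moves are superseded by the newest one;
/// discrete events like key presses are all kept.
fn coalesce(batch: Vec<Event>) -> Vec<Event> {
    let mut kept = Vec::new();
    let mut last_move = None;
    for ev in batch {
        match ev {
            m @ Event::PointerMove { .. } => last_move = Some(m),
            other => kept.push(other),
        }
    }
    kept.extend(last_move);
    kept
}
```

Against a queue holding two moves and a key press, `coalesce(drain_batch(&rx))` yields the key press plus only the latest move, where a scalar loop would have processed all three.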
taeric•7mo ago
I mean, I got that. But you could as easily say "instead of fetching and acting on one event at a time, fetch and triage/route instructions into applicable queues."

In particular, there is no guarantee that moving to batches changes any of the problems you may have from acting on a single one at a time. To that end, you will have to look into all of the other strategies sooner or later.

Following from that, the problem is not "processMessage" or whatever. The problem is that you haven't broken "processMessage" up into the constituent "receive/triage/process/resolve" loop that you almost certainly will have to end up with.
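One way to read that receive/triage/process split, as a hypothetical sketch (the message classes and the priority rule are made up for illustration): triage routes each incoming message into a per-class queue, and the process step drains the queues by priority rather than in arrival order.

```rust
use std::collections::VecDeque;

enum Message {
    Urgent(String),
    Bulk(String),
}

/// Receive/triage half of the loop: route each message into the queue for
/// its class instead of processing it on the spot.
struct Triage {
    urgent: VecDeque<String>,
    bulk: VecDeque<String>,
}

impl Triage {
    fn new() -> Self {
        Triage { urgent: VecDeque::new(), bulk: VecDeque::new() }
    }

    fn receive(&mut self, msg: Message) {
        match msg {
            Message::Urgent(s) => self.urgent.push_back(s),
            Message::Bulk(s) => self.bulk.push_back(s),
        }
    }

    /// Process half: urgent work always preempts bulk work, regardless of
    /// the order in which messages arrived.
    fn next(&mut self) -> Option<String> {
        self.urgent.pop_front().or_else(|| self.bulk.pop_front())
    }
}
```

A monolithic `processMessage` hides this choice; splitting the loop makes the scheduling policy an explicit, swappable piece.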

malkia•7mo ago
In CPUs: pipelining!
jchw•7mo ago
I believe something similar is going on internally in Windows with event queues. It coalesces and prioritizes input events when multiple of them pile up before you're able to pop new events off the queue. (For some events, e.g. pointer events, you can even go and query frames that were coalesced during input handling.) On the application/API end, it just looks like a "scalar select" loop, but actually it is doing batching behavior for input events!

(On the flip side, if you have a Wayland client that falls behind on processing its event queue, it can crash. On the whole this isn't really that bad but if you have something sending a shit load of events it can cause very bad behavior. This has made me wonder if it's possible, with UNIX domain sockets, to implement some kind of event coalescing on the server-side, to avoid flooding the client with high-precision pointer movement events while it's falling behind. Maybe start coalescing when FIONREAD gets to some high watermark? No idea...)
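That watermark idea could look something like this (the names and the threshold policy are hypothetical, not anything Wayland actually does): below the watermark, pointer moves are queued verbatim; once the client falls behind, the newest move overwrites the last queued one instead of growing the buffer.

```rust
/// Hypothetical server-side send buffer for one client.
struct OutBuffer {
    pending_moves: Vec<(i32, i32)>,
    watermark: usize,
}

impl OutBuffer {
    fn new(watermark: usize) -> Self {
        OutBuffer { pending_moves: Vec::new(), watermark }
    }

    /// Queue a pointer-move event. At or above the watermark the client is
    /// considered to be falling behind, so the newest position supersedes
    /// the last queued one rather than appending.
    fn push_move(&mut self, x: i32, y: i32) {
        if self.pending_moves.len() >= self.watermark {
            if let Some(last) = self.pending_moves.last_mut() {
                *last = (x, y);
                return;
            }
        }
        self.pending_moves.push((x, y));
    }
}
```

The buffer never exceeds the watermark for pointer moves, so a slow client sees the latest position instead of a backlog of stale ones.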