frontpage.

America vs. Singapore: You Can't Save Your Way Out of Economic Shocks

https://www.governance.fyi/p/america-vs-singapore-you-cant-save
57•guardianbob•1h ago•23 comments

Pebble Production: February Update

https://repebble.com/blog/february-pebble-production-and-software-updates
135•smig0•3h ago•45 comments

Paged Out Issue #8 [pdf]

https://pagedout.institute/download/PagedOut_008.pdf
113•SteveHawk27•3h ago•18 comments

Don't Trust the Salt: AI Summarization, Multilingual Safety, and LLM Guardrails

https://royapakzad.substack.com/p/multilingual-llm-evaluation-to-guardrails
132•benbreen•2d ago•46 comments

-fbounds-safety: Enforcing bounds safety for C

https://clang.llvm.org/docs/BoundsSafety.html
54•thefilmore•3d ago•39 comments

Bridging Elixir and Python with Oban

https://oban.pro/articles/bridging-with-oban
71•sorentwo•5h ago•22 comments

Coding Tricks Used in the C64 Game Seawolves

https://kodiak64.co.uk/blog/seawolves-technical-tricks
44•atan2•3h ago•4 comments

Show HN: A physically-based GPU ray tracer written in Julia

https://makie.org/website/blogposts/raytracing/
76•simondanisch•5h ago•32 comments

Against Theory-Motivated Experimentation

https://journals.sagepub.com/doi/10.1177/26339137261421577
13•paraschopra•1h ago•3 comments

Sizing chaos

https://pudding.cool/2026/02/womens-sizing/
724•zdw•18h ago•379 comments

Large Language Models for Mortals: A Practical Guide for Analysts with Python

https://crimede-coder.com/blogposts/2026/LLMsForMortals
18•apwheele•4d ago•2 comments

The Mongol Khans of Medieval France

https://www.historytoday.com/archive/feature/mongol-khans-medieval-france
70•Thevet•2d ago•19 comments

Famous Signatures Through History

https://signatory.app/#famous-signatures
21•elliotbnvl•2h ago•22 comments

Show HN: Mini-Diarium - An encrypted, local, cross-platform journaling app

https://github.com/fjrevoredo/mini-diarium
71•holyknight•4h ago•41 comments

Measuring AI agent autonomy in practice

https://www.anthropic.com/research/measuring-agent-autonomy
9•jbredeche•1h ago•2 comments

27-year-old Apple iBooks can connect to Wi-Fi and download official updates

https://old.reddit.com/r/MacOS/comments/1r8900z/macos_which_officially_supports_27_year_old/
411•surprisetalk•19h ago•230 comments

Voith Schneider Propeller

https://en.wikipedia.org/wiki/Voith_Schneider_Propeller
63•Luc•3d ago•15 comments

ShannonMax: A Library to Optimize Emacs Keybindings with Information Theory

https://github.com/sstraust/shannonmax
31•sammy0910•4h ago•5 comments

15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern

https://nicolasdickenmann.com/blog/the-great-fp64-divide.html
163•fp64enjoyer•14h ago•59 comments

Old School Visual Effects: The Cloud Tank (2010)

http://singlemindedmovieblog.blogspot.com/2010/04/old-school-effects-cloud-tank.html
62•exvi•9h ago•8 comments

Cosmologically Unique IDs

https://jasonfantl.com/posts/Universal-Unique-IDs/
440•jfantl•21h ago•139 comments

Step 3.5 Flash – Open-source foundation model, supports deep reasoning at speed

https://static.stepfun.com/blog/step-3.5-flash/
159•kristianp•13h ago•66 comments

Zero downtime migrations at Petabyte scale

https://planetscale.com/blog/zero-downtime-migrations-at-petabyte-scale
6•Ozzie_osman•2d ago•0 comments

Tailscale Peer Relays is now generally available

https://tailscale.com/blog/peer-relays-ga
446•sz4kerto•23h ago•219 comments

Anthropic officially bans using subscription auth for third party use

https://code.claude.com/docs/en/legal-and-compliance
532•theahura•13h ago•646 comments

A word processor from the 1990s for Atari ST/TOS is still supported by enthusiasts

https://tempus-word.de/en/index
88•muzzy19•2d ago•41 comments

Zero-day CSS: CVE-2026-2441 exists in the wild

https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop_13.html
363•idoxer•23h ago•206 comments

Gemini 3.1 Pro Preview

https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemini-3.1-pro-preview?...
9•MallocVoidstar•49m ago•1 comment

DOGE Track

https://dogetrack.info/
222•donohoe•3h ago•107 comments

How to choose between Hindley-Milner and bidirectional typing

https://thunderseethe.dev/posts/how-to-choose-between-hm-and-bidir/
122•thunderseethe•3d ago•43 comments

The Scalar Select Anti-Pattern

https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html
47•goranmoomin•9mo ago

Comments

castratikron•9mo ago
As long as processing one event does not affect any of the other events in the batch. E.g. if the events are file I/O, processing one event might cause another event's descriptor to get closed before that event can be processed.
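A minimal sketch of that hazard in Rust (the language of the article's examples); the IoEvent type, handle function, and Outcome enum are hypothetical names, not from the article or the comment:

    use std::collections::HashSet;

    // Hypothetical event type: each event refers to a file descriptor.
    struct IoEvent {
        fd: i32,
        // payload elided
    }

    #[derive(PartialEq)]
    enum Outcome { Handled, ClosedFd }

    // Application-specific handling; hypothetical stand-in.
    fn handle(_ev: &IoEvent) -> Outcome { Outcome::Handled }

    // Process a drained batch, guarding against the case where handling one
    // event closes a descriptor that a later event in the same batch still
    // refers to.
    fn process_batch(batch: Vec<IoEvent>) {
        let mut closed: HashSet<i32> = HashSet::new();
        for ev in batch {
            if closed.contains(&ev.fd) {
                // An earlier event in this batch closed the descriptor; the
                // event is stale (the fd number may even have been reused),
                // so it must be skipped or re-validated.
                continue;
            }
            if handle(&ev) == Outcome::ClosedFd {
                closed.insert(ev.fd);
            }
        }
    }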
wahern•9mo ago
If the close routine on an event source, or the low-level (e.g. epoll) registration, deregistration, and dequeueing logic, doesn't know how to keep polling and liveness state consistent between userspace and the kernel, they've got much bigger problems. This looks like Rust code, so I would hope the event stream libraries are, e.g., keeping Rc'd file objects and properly managing reference integrity vis-à-vis kernel state before the application caller ever sees the first dequeued event in a cycle. This is a perennial issue with event loop libraries and buggy application code (in every language). One can't just deal with raw file descriptors, call the close syscall directly, etc., and hope to keep state consistent implicitly. There's an unavoidable tie-in between the application's wrappers around low-level resources and the event loop in use.
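That tie-in usually takes the form of a resource wrapper that is closed through the event loop rather than behind its back. A rough sketch; EventLoop and Registered are hypothetical types, standing in for whatever a real library (mio, tokio, a homegrown loop) provides:

    use std::os::fd::OwnedFd;

    // Hypothetical handle to the event loop; stands in for whatever
    // epoll/kqueue wrapper the application actually uses.
    struct EventLoop;

    impl EventLoop {
        fn deregister(&self, _fd: &OwnedFd) {
            // e.g. epoll_ctl(EPOLL_CTL_DEL, ...) plus discarding any
            // already-dequeued events that still reference this fd.
        }
    }

    // A resource that is only ever closed via the event loop, never by
    // calling close(2) on the raw descriptor directly.
    struct Registered<'a> {
        fd: OwnedFd,
        event_loop: &'a EventLoop,
    }

    impl Drop for Registered<'_> {
        fn drop(&mut self) {
            // Deregister first, so kernel and userspace agree the fd is gone;
            // OwnedFd's own Drop then closes the descriptor.
            self.event_loop.deregister(&self.fd);
        }
    }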
taeric•9mo ago
I'm not entirely clear on what the proposal at the end is. It seems the long-term answer to "which of these implications to pursue" is "all of them". Simply taking in a batch of instructions doesn't immediately change much: you still have to be able to do each of the other things, and you will still expect some dependencies between batches that could interact in the same ways.

In a sense, this is no different from how your processor deals with instructions coming in. You will have some instructions that can run without waiting on previous ones. You will have some that can complete quickly. You will have some that are stalled on other parts of the system. (I'm sure I could word an instruction to match each of the implications.)

To that end, part of your program has to deal with taking off "what's next" and preparing it to pass to the execution portion of your program. You can make that part only take in batches, but you are almost certainly responsible for how you chunk them, more so than whatever process is sending the instructions to you. Even if you are handed clean batches, it is incumbent on you to batch them as they go off to the rest of the system.

lmz•9mo ago
I guess the proposal is "instead of fetching and acting on one event at a time, consider fetching all available events and looking for opportunities to optimize which ones you process (e.g. by prioritization, or by skipping certain events if superseded by newer ones)".
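A minimal sketch of that reading, assuming a tokio mpsc channel as the event source (the article is about async Rust, but this particular channel choice, the Event enum, and the "last write per key wins" coalescing rule are illustrative assumptions):

    use std::collections::HashMap;
    use tokio::sync::mpsc;

    // Hypothetical event type: a newer Set for the same key supersedes
    // older ones, so only the last write per key needs to be applied.
    enum Event {
        Set { key: String, value: u64 },
        Shutdown,
    }

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel::<Event>(256);

        tokio::spawn(async move {
            for i in 0..1000u64 {
                let _ = tx.send(Event::Set { key: "counter".into(), value: i }).await;
            }
            let _ = tx.send(Event::Shutdown).await;
        });

        loop {
            // Block for the first event, then drain whatever else is already
            // queued so the whole backlog can be considered at once.
            let Some(first) = rx.recv().await else { break };
            let mut batch = vec![first];
            while let Ok(ev) = rx.try_recv() {
                batch.push(ev);
            }

            // Coalesce: keep only the last Set per key; remember any Shutdown.
            let mut last: HashMap<String, u64> = HashMap::new();
            let mut shutdown = false;
            for ev in batch {
                match ev {
                    Event::Set { key, value } => { last.insert(key, value); }
                    Event::Shutdown => shutdown = true,
                }
            }
            for (key, value) in &last {
                println!("apply {key} = {value}"); // one write instead of many
            }
            if shutdown {
                break;
            }
        }
    }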
taeric•9mo ago
I mean, I got that. But you could just as easily say "instead of fetching and acting on one event at a time, fetch and triage/route instructions into applicable queues."

In particular, there is no guarantee that moving to batches changes any of the problems you may have from acting on a single one at a time. To that end, you will have to look into all of the other strategies sooner or later.

Following from that, the problem is not "processMessage" or whatever. The problem is that you haven't broken "processMessage" up into the constituent "receive/triage/process/resolve" loop that you almost certainly will have to end up with.
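One way to make that split concrete, as a sketch (Message, Stage, and the urgent-first rule are hypothetical; the point is only that triage/route is its own step, separate from processing):

    use std::collections::VecDeque;

    // Hypothetical message and outcome types for the
    // receive/triage/process/resolve split.
    struct Message { urgent: bool, body: String }
    enum Resolution { Done, Requeue(Message) }

    // Application-specific work; hypothetical stand-in.
    fn handle(msg: Message) -> Resolution {
        println!("handling: {}", msg.body);
        Resolution::Done
    }

    #[derive(Default)]
    struct Stage {
        urgent: VecDeque<Message>,
        normal: VecDeque<Message>,
    }

    impl Stage {
        // receive: take a whole batch from whatever transport is in use.
        fn receive(&mut self, incoming: Vec<Message>) {
            for msg in incoming {
                self.triage(msg);
            }
        }

        // triage: route into the applicable queue instead of acting immediately.
        fn triage(&mut self, msg: Message) {
            if msg.urgent {
                self.urgent.push_back(msg);
            } else {
                self.normal.push_back(msg);
            }
        }

        // process: drain urgent work first, then the rest.
        fn process(&mut self) {
            while let Some(msg) = self.urgent.pop_front().or_else(|| self.normal.pop_front()) {
                let outcome = handle(msg);
                self.resolve(outcome);
            }
        }

        // resolve: decide what the outcome means for the queues (ack, retry, ...).
        fn resolve(&mut self, outcome: Resolution) {
            if let Resolution::Requeue(msg) = outcome {
                self.triage(msg);
            }
        }
    }

    fn main() {
        let mut stage = Stage::default();
        stage.receive(vec![
            Message { urgent: false, body: "routine".into() },
            Message { urgent: true, body: "drop everything".into() },
        ]);
        stage.process(); // handles "drop everything" before "routine"
    }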

malkia•9mo ago
In CPUs: pipelining!
jchw•9mo ago
I believe something similar is going on internally in Windows with event queues. It coalesces and prioritizes input events when multiple of them pile up before you're able to pop new events off of the queue. (For some events, e.g. pointer events, you can even go and query frames that were coalesced during input handling.) On the application/API end, it just looks like a "scalar select" loop, but actually it is doing batching behavior for input events!

(On the flip side, if you have a Wayland client that falls behind on processing its event queue, it can crash. On the whole this isn't really that bad, but if you have something sending a shitload of events it can cause very bad behavior. This has made me wonder whether it's possible, with UNIX domain sockets, to implement some kind of event coalescing on the server side, to avoid flooding the client with high-precision pointer movement events while it's falling behind. Maybe start coalescing when FIONREAD gets to some high watermark? No idea...)
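The coalescing step itself is straightforward once events arrive in batches; the open question above is where (client or server) to run it. A sketch of the collapse, with a hypothetical Input type: high-rate pointer moves collapse to the latest position, while discrete events are all kept and keep their ordering relative to the moves:

    // Hypothetical input-event type, to illustrate the kind of coalescing
    // described above: only the newest pointer position matters, but every
    // discrete event (e.g. a click) must survive.
    #[derive(Debug)]
    enum Input {
        PointerMove { x: f64, y: f64 },
        Click { x: f64, y: f64 },
    }

    fn coalesce(batch: Vec<Input>) -> Vec<Input> {
        let mut out = Vec::new();
        let mut pending_move: Option<Input> = None;
        for ev in batch {
            if matches!(ev, Input::PointerMove { .. }) {
                // Later moves overwrite earlier ones.
                pending_move = Some(ev);
            } else {
                // A discrete event flushes the pending move first, so its
                // ordering relative to pointer motion is preserved.
                if let Some(mv) = pending_move.take() {
                    out.push(mv);
                }
                out.push(ev);
            }
        }
        if let Some(mv) = pending_move {
            out.push(mv);
        }
        out
    }

    fn main() {
        let batch = vec![
            Input::PointerMove { x: 1.0, y: 1.0 },
            Input::PointerMove { x: 2.0, y: 2.0 },
            Input::Click { x: 2.0, y: 2.0 },
            Input::PointerMove { x: 3.0, y: 3.0 },
        ];
        // Prints one move, the click, then the final move.
        println!("{:?}", coalesce(batch));
    }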