frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
80•klaussilveira•51m ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
580•xnx•6h ago•370 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
164•vecti•3h ago•69 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
27•isitcontent•1h ago•5 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
251•aktau•7h ago•130 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
54•coloneltcb•2d ago•27 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
240•ostacke•7h ago•58 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
6•phreda4•32m ago•0 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
53•vmatsiiako•5h ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
95•limoce•3d ago•42 comments

Tell HN: I'm a PM at a big system of record SaaS. We're cooked

23•throwawaypm123•1h ago•0 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
105•i5heu•3h ago•80 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
7•dmpetrov•1h ago•3 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
189•surprisetalk•3d ago•24 comments

Early Christian Writings

https://earlychristianwritings.com/
23•dsego•41m ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
214•lstoll•7h ago•163 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
5•betamark•8h ago•0 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
875•cdrnsf•10h ago•380 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
62•antves•1d ago•49 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
83•eljojo•3h ago•80 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
7•lebovic•1d ago•1 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
6•nwparker•1d ago•3 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
10•JoshPurtell•21h ago•3 comments

The mystery of the mole playing rough (2019) [video]

https://www.youtube.com/watch?v=nwQmwT1ULMU
4•archagon•15h ago•0 comments

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
5•NathanFlurry•9h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
4•rescrv•8h ago•1 comments

Masked namespace vulnerability in Temporal

https://depthfirst.com/post/the-masked-namespace-vulnerability-in-temporal-cve-2025-14986
22•bmit•2h ago•2 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
317•todsacerdoti•8h ago•177 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
20•dchu17•5h ago•6 comments

The Beauty of Slag

https://mag.uchicago.edu/science-medicine/beauty-slag
4•sohkamyung•3d ago•0 comments

The Scalar Select Anti-Pattern

https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html
47•goranmoomin•8mo ago

Comments

castratikron•8mo ago
This works as long as processing one event does not affect any of the other events in the batch. E.g., the events are file IO, and processing one event causes another event's descriptor to be closed before that event can be processed.
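A minimal sketch of that hazard, with hypothetical names (nothing here is from the article): a batch loop has to remember which descriptors earlier handlers in the same batch already closed, or it ends up acting on a stale fd.

    use std::collections::HashSet;

    struct Readiness {
        fd: i32,
    }

    fn process_batch(batch: &[Readiness], open: &mut HashSet<i32>) {
        for ev in batch {
            // An earlier handler in this same batch may have closed this fd
            // (e.g. as a side effect of tearing down a peer connection);
            // acting on it anyway is exactly the bug described above.
            if !open.contains(&ev.fd) {
                continue;
            }
            handle(ev, open);
        }
    }

    fn handle(ev: &Readiness, open: &mut HashSet<i32>) {
        // Real work goes here; a handler may remove *other* fds from `open`
        // before their events in this batch have been processed.
        let _ = (ev, open);
    }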
wahern•8mo ago
If the close routine on an event source, or the low-level (e.g. epoll) registration, deregistration, and dequeueing logic, doesn't know how to keep polling and liveness state consistent between userspace and the kernel, there are much bigger problems. This looks like Rust code, so I would hope the event stream libraries are, e.g., keeping Rc'd file objects and properly managing reference integrity vis-à-vis kernel state before the application caller ever sees the first dequeued event in a cycle. This is a perennial issue with event loop libraries and buggy application code (in every language). One can't just deal in raw file descriptors, call the close syscall directly, etc., and hope to keep state consistent implicitly. There's an unavoidable tie-in between the application's wrappers around low-level resources and the event loop in use.
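As a rough illustration of that tie-in (hypothetical types, not any particular library's API): the application holds an Rc'd registration rather than a raw fd, and close goes through the registration, so the loop can recognize and drop stale events it has already dequeued in the same cycle.

    use std::cell::Cell;
    use std::rc::Rc;

    struct Registration {
        fd: i32,
        live: Cell<bool>,
    }

    impl Registration {
        // Close via the wrapper, never via a raw close(2) on the fd: mark the
        // registration dead first (and, in a real loop, epoll_ctl(DEL) it),
        // so events already dequeued for this fd can be recognized as stale.
        fn close(&self) {
            self.live.set(false);
            // epoll_ctl(EPOLL_CTL_DEL, self.fd, ...) and close(self.fd) here.
        }
    }

    fn dispatch(batch: &[Rc<Registration>]) {
        for reg in batch {
            if !reg.live.get() {
                continue; // closed earlier in this same cycle; skip it
            }
            // deliver readiness for reg.fd to the application ...
            let _ = reg.fd;
        }
    }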
taeric•8mo ago
I'm not entirely clear on what the proposal is at the end? It seems the long-term answer to "which of these implications to pursue" is "all of them?" Simply taking in a batch of instructions doesn't immediately change much? You still have to be able to do each of the other things. And you will still expect some dependencies between batches that could interact in the same ways.

In a sense, this is no different from how your processor deals with incoming instructions. You will have some instructions that can run without waiting on previous ones. You will have some that can complete quickly. You will have some that are stalled on other parts of the system. (I'm sure I could find an instruction to match each of the implications.)

To that end, part of your program has to deal with taking off "what's next" and figuring out how to prepare that to pass to the execution portion of your program. You can make that part only take in batches, but you are almost certainly responsible for how you chunk them, more so than whatever process is sending the instructions to you? Even if you are handed clear batches, it is incumbent on you to batch them as they go off to the rest of the system.

lmz•8mo ago
I guess the proposal is "instead of fetching and acting on one event at a time, consider fetching all available events and looking for opportunities to optimize which ones you process (e.g. by prioritization, or by skipping certain events if they have been superseded by newer ones)".
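A small sketch of that reading, using a plain std::sync::mpsc channel and made-up event types (assumptions of mine, not the article's code): block for the first event, then opportunistically drain everything else already queued and keep only the newest event per key before doing any expensive work.

    use std::collections::HashMap;
    use std::sync::mpsc::Receiver;

    struct Event {
        key: u32,     // which resource the event is about
        payload: u64, // a newer payload supersedes older ones for the same key
    }

    fn drain_and_coalesce(rx: &Receiver<Event>) -> Vec<Event> {
        let mut latest: HashMap<u32, Event> = HashMap::new();
        // Block for at least one event so the loop still sleeps when idle.
        if let Ok(first) = rx.recv() {
            latest.insert(first.key, first);
            // Then take whatever has piled up, without blocking again.
            while let Ok(ev) = rx.try_recv() {
                latest.insert(ev.key, ev); // newer replaces older: coalescing
            }
        }
        // The caller can now prioritize or reorder this batch as it likes.
        latest.into_values().collect()
    }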
taeric•8mo ago
I mean, I got that. But you could just as easily say "instead of fetching and acting on one event at a time, fetch and triage/route instructions into applicable queues."

In particular, there is no guarantee that moving to batches changes any of the problems you may have from acting on a single one at a time. To that end, you will have to look into all of the other strategies sooner or later.

Following from that, the problem is not "processMessage" or whatever. The problem is that you haven't broken "processMessage" up into the constituent "receive/triage/process/resolve" loop that you will almost certainly end up needing.
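To make that concrete, a loose sketch of such a split (the names and message types are mine, not the post's): the loop receives a batch, triages it into per-kind queues, and only then processes and resolves them with whatever policy fits.

    use std::collections::VecDeque;
    use std::sync::mpsc::Receiver;

    enum Message {
        Control(String),
        Data(Vec<u8>),
    }

    fn run(rx: Receiver<Message>) {
        let mut control: VecDeque<String> = VecDeque::new();
        let mut data: VecDeque<Vec<u8>> = VecDeque::new();
        while let Ok(first) = rx.recv() {
            // receive: drain whatever has piled up behind the first message
            let mut batch = vec![first];
            while let Ok(m) = rx.try_recv() {
                batch.push(m);
            }
            // triage: route by kind so control traffic can jump the queue
            for m in batch {
                match m {
                    Message::Control(c) => control.push_back(c),
                    Message::Data(d) => data.push_back(d),
                }
            }
            // process/resolve: control first, then the actual work
            while let Some(c) = control.pop_front() {
                let _ = c; // apply a config change, cancel work, etc.
            }
            while let Some(d) = data.pop_front() {
                let _ = d; // handle the payload
            }
        }
    }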

malkia•8mo ago
In CPUs: pipelining!
jchw•8mo ago
I believe something similar is going on internally in Windows with event queues. It coalesces and prioritizes input events when multiple of them pile up before you're able to pop new events off of the queue. (For some events, e.g. pointer events, you can even go and query frames that were coalesced during input handling.) On the application/API end, it just looks like a "scalar select" loop, but actually it is doing batching behavior for input events!

(On the flip side, if you have a Wayland client that falls behind on processing its event queue, it can crash. On the whole this isn't really that bad but if you have something sending a shit load of events it can cause very bad behavior. This has made me wonder if it's possible, with UNIX domain sockets, to implement some kind of event coalescing on the server-side, to avoid flooding the client with high-precision pointer movement events while it's falling behind. Maybe start coalescing when FIONREAD gets to some high watermark? No idea...)