frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
44•ColinWright•52m ago•11 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
53•thelok•3h ago•6 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
14•surprisetalk•1h ago•7 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
119•AlexeyBrin•7h ago•22 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
90•alephnerd•1h ago•33 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
54•vinhnx•4h ago•7 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
822•klaussilveira•21h ago•248 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
99•1vuio0pswjnm7•8h ago•114 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1057•xnx•1d ago•607 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
75•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
475•theblazehen•2d ago•175 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
198•jesperordrup•11h ago•68 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
543•nar001•5h ago•252 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
212•alainrk•6h ago•328 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
34•rbanffy•4d ago•6 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
27•marklit•5d ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
113•videotopia•4d ago•30 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
72•speckx•4d ago•74 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
66•mellosouls•4h ago•72 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•21h ago•37 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
285•dmpetrov•22h ago•153 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
554•todsacerdoti•1d ago•268 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
42•matt_d•4d ago•17 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
472•lstoll•1d ago•310 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•214 comments

Async Mutexes

https://matklad.github.io/2025/11/04/on-async-mutexes.html
69•ingve•3mo ago

Comments

pwnna•2mo ago
For single-threaded, cooperative multitasking systems (such as JavaScript and what OP is discussing), async mutexes[1] are IMO a strong anti-pattern and a red flag in the code. In this kind of code, execution is effectively "atomic" until you call an await and yield to the event loop. Programming this properly simply requires making sure the state variables are consistent before yielding. You can also reconstruct the state at the beginning of your block, knowing that nothing else can interrupt your code. Both of these approaches are documented in the OP.

Throwing an async mutex at the lack of atomicity before yielding is basically telling me "I don't know when I'm calling await in this code, so I might as well give up". In my experience this is strongly correlated with the original designer not knowing what they are doing, especially in languages like JavaScript. Even if they did understand the problem, this can introduce difficult-to-debug bugs and deadlocks that would otherwise not appear. You also introduce an event-queue scheduling delay, which can be substantial depending on how often you're locking and unlocking.

IMO this stuff is best avoided and you should just write your cooperative multitasking code properly, but this does require a bit more advanced knowledge (not that advanced, but maybe for the JS community). I wish TypeScript would help people out here, but it doesn't: calling an async function (or even a normal function) does not invalidate type narrowing done on escaped variables, probably for usability reasons, but that is actually the wrong thing to do.

[1]: https://www.npmjs.com/package/async-mutex
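A minimal asyncio sketch of that "reconstruct the state after the yield" approach (the jobs/run_job/fetch names are invented for illustration): rather than trusting a value read before the await, the code re-checks the shared dict after resuming.

    import asyncio

    jobs = {}   # shared state that many tasks may mutate

    async def run_job(name):
        if name in jobs:                   # safe: no yield between this check and the return
            return jobs[name]
        result = await fetch(name)         # yield point: other tasks may run here
        if name not in jobs:               # reconstruct/re-check the state after resuming
            jobs[name] = result
        return jobs[name]

    async def fetch(name):
        await asyncio.sleep(0)             # stand-in for a real async call
        return len(name)

    print(asyncio.run(run_job("compaction")))   # 10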

michaelsbradley•2mo ago
Also:

Developers should properly learn the difference between push and pull reactivity, and leverage both appropriately.

Many, though not all, problems where an async mutex might be applied can instead be simplified with the use of an async queue and the accompanying serialization of reactions (pull reactivity).
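A small illustration of that pull-reactivity idea, sketched with asyncio.Queue (the producer/consumer/handle names are made up): reactions are pulled off the queue and processed one at a time, so they never interleave even though handle awaits.

    import asyncio

    events = asyncio.Queue()

    async def producer():
        for i in range(3):
            await events.put(i)        # push: fire-and-forget notification

    async def consumer():
        # Pull: reactions are taken one at a time, so they never overlap.
        for _ in range(3):
            event = await events.get()
            await handle(event)        # even though this awaits, the next event waits

    async def handle(event):
        await asyncio.sleep(0)         # stand-in for real async work
        print("handled", event)

    async def main():
        await asyncio.gather(producer(), consumer())

    asyncio.run(main())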

zeroq•2mo ago
It's kids taking toys off the shelf and playing with words. Gives me a strong Angular vibe, with its misuse of "dependency injection".
paulddraper•2mo ago
How did angular misuse DI?
fjwufajsd•2mo ago
I'm not familiar with AngularJS so I did a quick google: https://angular.dev/guide/di#where-can-inject-be-used

It looks eerily similar to the Spring DI framework, yikes.

never_inline•2mo ago
Here's a use case: singleton instantiation on first request, where instantiation itself requires an async call (e.g., to a DB or an external service).

    import asyncio

    _lock = asyncio.Lock()
    _instance = None

    async def get_singleton():
        global _instance
        if _instance is not None:
            return _instance
        async with _lock:
            if _instance is None:
                _instance = await costly_function()
        return _instance
How do you suggest replacing this?
duped•2mo ago
The traditional thing would be to have an init() function that is required to be called at the top of main() or before any other methods that need it. But I agree with your point.
never_inline•2mo ago
Now let's say it's an async cache instead of a singleton.
duped•2mo ago
Return the cached item if it exists; otherwise spawn a task to update it (doing nothing if the task has already been spawned), await the task, and return the cached item.
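A minimal sketch of that approach in asyncio (get_cached/_load are hypothetical names): concurrent callers share one in-flight task per key, so the costly call runs only once.

    import asyncio

    _cache = {}
    _pending = {}

    async def get_cached(key):
        if key in _cache:
            return _cache[key]
        # Spawn the refresh task only once; concurrent callers await the same task.
        if key not in _pending:
            _pending[key] = asyncio.create_task(_load(key))
        try:
            value = await _pending[key]
        finally:
            _pending.pop(key, None)
        _cache[key] = value
        return value

    async def _load(key):
        await asyncio.sleep(0)     # stand-in for a DB or network call
        return key.upper()

    async def main():
        print(await asyncio.gather(get_cached("a"), get_cached("a")))  # ['A', 'A']

    asyncio.run(main())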
never_inline•2mo ago
Thanks, that's a useful trick.
didibus•2mo ago
I'm not fully able to follow what the article is trying to say.

I got especially confused at the actor part; aren't actors share-nothing by design?

It does say there's only one actor, so I guess it has nothing to do with actors? They're talking about GenServer-like behavior within a single actor.

As I write this, I'm figuring out that maybe they mean it's like a single-actor GenServer sending messages to itself, where each message updates some global state. Even then, though, actors don't yield inside callbacks: if you send a message it gets queued to run, but it won't suspend the currently running callback, so the current handler runs to completion before the next one runs.

Erlang does single-threaded actors with serialized message processing. If I understand the article, that avoids the issue it brings up completely, as you cannot have captured old state that is stale when resuming.

Ygg2•2mo ago
> I got especially confused at the actor part; aren't actors share-nothing by design?

Me as well. I thought actors were a bit like mailboxes that process and send their messages.

rdtsc•2mo ago
It's half-baked actors. Well, not baked at all ("no-bake actors"?), since isolated heaps and shared-nothing are not easy to accomplish. OS processes can do it, but they can be heavyweight. The BEAM VM does a great job of it, though.
surajrmal•2mo ago
Share-nothing is often too coarse-grained a serialization strategy to fit every situation. Using a lock or another shared-memory primitive (RCU, hazard pointers, etc.) can lead to better performance, thanks to the smaller granularity of the data being protected and the lower latency needed to access it. Optimizing for latency is very different from optimizing for throughput.
rdtsc•2mo ago
Every sufficiently complex distributed system will eventually implement Erlang, but in an ad-hoc and not very good way. Quite a lot of systems ended up with actors: TigerBeetle in Zig, FoundationDB in C++. But they miss a critical aspect of it, and that's isolated memory between processes. And they don't know how to yield unless they do it on IO or yield cooperatively. The global shared heap is really the dangerous part, though. Rust could avoid the danger by asserting some guarantees at compile time.
hawk_•2mo ago
I can't imagine those examples you picked implemented in Erlang performing anywhere close to the Zig/C++ ones. So the "ad-hoc subset" there is by design.
rdtsc•2mo ago
It is ad-hoc because they then write blog posts wondering "how come we still need mutexes, we thought async operations don't need them".
James_K•2mo ago
They have realised that, in single-threaded code, an implicit mutex is created between each call to await, and therefore state variables can usually be organised in such a way as to avoid explicit mutexes. Of course, one strongly suspects that such code will involve a lot of boolean variables with names such as "exclude_other_accesses_to_data".
gsf_emergency_4•2mo ago
So..not even wait-free? (Since you still have locks)

https://en.wikipedia.org/wiki/Non-blocking_algorithm#Obstruc...

mrkeen•2mo ago
I don't get this:

  the entire Compaction needs to be wrapped in a mutex, and every callback needs to start with lock/unlock pair ... So explicit locking probably gravitates towards having just a single global lock around the entire state, which is acquired for the duration of any callback.
This is just dining philosophers.

When I compact, I take read locks on victims A and B while I use them to produce C. So callers can still query A and B during compaction. Once C is produced, I atomically add C and remove A and B, so no caller should see double or missing records.
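A toy asyncio sketch of that compaction scheme (the levels/query/compact names are invented): readers keep querying the old runs while the merge is in progress, and the final swap contains no await, so it is atomic with respect to other tasks.

    import asyncio

    levels = [{"a": 1}, {"a": 2, "b": 3}]        # immutable-by-convention runs

    def query(key):
        for run in reversed(levels):             # newest run wins; readers never block
            if key in run:
                return run[key]
        return None

    async def compact(i, j):
        global levels
        a, b = levels[i], levels[j]
        merged = {}
        for run in (a, b):
            merged.update(run)
            await asyncio.sleep(0)               # yield mid-merge; readers still see a and b
        # The swap has no await in it, so no other task observes a half-done state.
        levels = [r for r in levels if r is not a and r is not b] + [merged]

    async def main():
        await compact(0, 1)
        print(query("a"), query("b"))            # 2 3

    asyncio.run(main())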

user28712094•2mo ago
Curious how to async-walk a tree that may or may not be under edit operations, without a mutex.
surajrmal•2mo ago
There are RCU-safe trees, like the maple tree in Linux: https://docs.kernel.org/core-api/maple_tree.html

The tl;dr is that it uses garbage collection to let readers see the older version of the tree they were walking while the latest copy still gets updated.

I also recently read an interesting paper about concurrent interval skip lists.
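A toy copy-on-write sketch of the same idea in plain Python (not the maple tree itself; Node/insert/contains are invented names): readers walk whatever root they snapshotted, writers publish a new version with a single reference swap, and old versions are reclaimed by ordinary garbage collection.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Node:
        key: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def contains(snapshot, key):
        while snapshot is not None:              # walk whatever version we grabbed
            if key == snapshot.key:
                return True
            snapshot = snapshot.left if key < snapshot.key else snapshot.right
        return False

    def insert(node, key):
        # Build a new path instead of mutating; untouched subtrees are shared.
        if node is None:
            return Node(key)
        if key < node.key:
            return Node(node.key, insert(node.left, key), node.right)
        if key > node.key:
            return Node(node.key, node.left, insert(node.right, key))
        return node

    root = insert(None, 10)
    snap = root                                  # reader takes a snapshot
    root = insert(root, 42)                      # writer publishes a new version
    print(contains(snap, 42), contains(root, 42))   # False True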

halayli•2mo ago
That's not the right abstraction. What OP needs is a barrier/latch.
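A minimal latch sketch using asyncio.Event (the editor/walker names are made up): the walker simply waits until the edit phase signals completion, and no mutex is held during the walk.

    import asyncio

    edits_done = asyncio.Event()       # acts as a one-shot latch

    async def editor(tree):
        tree.append(42)                # perform edits...
        edits_done.set()               # ...then open the latch

    async def walker(tree):
        await edits_done.wait()        # block until edits are finished
        for node in tree:
            print(node)

    async def main():
        tree = []
        await asyncio.gather(editor(tree), walker(tree))

    asyncio.run(main())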
bheadmaster•2mo ago
Heh, I'm not the only one who always wrote his own async mutex for any non-trivial async codebase.

It may seem like single-threaded execution doesn't need mutexes, but async/await is just another implementation of userspace threads. And like all threads, they can lead to data races if there are yield points (await) inside a critical section.

People may say what they want about bad design and reactive programming, but the threading model is inherently easier to reason about.
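A small self-contained demonstration of such a race under asyncio (the withdraw/balance example is invented): the check and the update are separated by an await, so two tasks can both pass the check.

    import asyncio

    balance = 100

    async def withdraw(amount):
        global balance
        if balance >= amount:              # check
            await asyncio.sleep(0)         # yield point inside the critical section
            balance -= amount              # acts on a now-stale check

    async def main():
        await asyncio.gather(withdraw(80), withdraw(80))
        print(balance)                     # -60: both tasks passed the check

    asyncio.run(main())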

surajrmal•2mo ago
Yeah, if you reason about each task as a thread, then all the same problems with sharing data between threads apply to sharing data between tasks, regardless of whether the runtime is single-threaded or not. The only real difference is that you can avoid atomics. Generally, I advocate avoiding any sharing of non-trivial state that would necessitate holding a "lock" across an await point (there are Rust lints to help you here if you use RefCell). This usually requires an actor pattern that explicitly queues up work needing that state into its own self-contained task, rather than reaching for an async mutex.
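A minimal sketch of that actor pattern in asyncio (counter_actor/send are hypothetical names): a single task owns the counter and serializes access by processing messages from a queue, so no async mutex is needed.

    import asyncio

    async def counter_actor(inbox):
        count = 0                              # state owned exclusively by this task
        while True:
            msg, reply = await inbox.get()
            if msg == "inc":
                count += 1
            reply.set_result(count)            # respond via a future

    async def send(inbox, msg):
        reply = asyncio.get_running_loop().create_future()
        await inbox.put((msg, reply))
        return await reply

    async def main():
        inbox = asyncio.Queue()
        actor = asyncio.create_task(counter_actor(inbox))
        print(await send(inbox, "inc"))        # 1
        print(await send(inbox, "inc"))        # 2
        actor.cancel()                         # shut the actor down

    asyncio.run(main())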
ggygygy•2mo ago
How are hackers