But coroutines still interleave execution at every await point, so shared mutable state can become just as fragile as in multithreaded code — the scheduling boundary just moves from OS threads to cooperative yield points.
In practice that tends to push designs toward queues, actors, or message-passing patterns if you want to avoid subtle state corruption.
With traditional locking, the locked segment is usually very clear. Race detectors can verify that objects are accessed under consistent locking, and the yield points are also explicit.
With async code, ANY await point can change ANY state. And await points are common, sometimes even for things like logging. There are also no tools to verify the consistent "locking".
So I often spend hours staring blankly at logs, trying to reconstruct a possible sequence of callbacks that could have led to a bug. E.g.: https://github.com/expo/expo/issues/39428
Being careful about which functions you call is fragile and tedious, and it doesn't compose well: what happens when a library adds a yield point in an update?
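A minimal sketch of that failure mode (the names and amounts are made up): a check-then-act sequence that looks atomic breaks as soon as an await sneaks in between the check and the act:

```python
import asyncio

balance = 100  # shared mutable state, guarded by nothing

async def withdraw(amount):
    global balance
    if balance >= amount:
        # the await below (imagine a log write or an RPC) yields to the event
        # loop, so another coroutine can pass the same check before we act
        await asyncio.sleep(0)
        balance -= amount

async def main():
    await asyncio.gather(withdraw(80), withdraw(80))
    return balance

final = asyncio.run(main())
print(final)  # -60: both coroutines passed the balance check before either deducted
```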
Overall, async/await is a result of people programming like it's 2003, when threads were still very expensive.
rather than "What Python's asyncio primitives get wrong" this seems more like "why we chose one asyncio primitive (queue) instead of others (event and condition)"
also, halfway through the post, the problem grows a new requirement:
> Instead of waking consumers and asking "is the current state what you want?", buffer every transition into a per-consumer queue. Each consumer drains its own queue and checks each transition individually. The consumer never misses a state.
if buffering every state change is a requirement, then...yeah, you're gonna need a buffer of some kind. the previous proposed solutions (polling, event, condition) would never have worked.
given the full requirements up-front, you can jump straight to "just use a queue" - with the downside that it would make for a less interesting blog post.
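the per-consumer-queue design the post lands on can be sketched roughly like this (class and method names are my own, not the post's):

```python
import asyncio

class StateBroadcaster:
    """Push every transition into each consumer's own queue,
    so no consumer can miss an intermediate state."""

    def __init__(self, value):
        self.value = value
        self.queues = []          # one asyncio.Queue per consumer

    def subscribe(self):
        q = asyncio.Queue()
        self.queues.append(q)
        return q

    def set(self, value):
        self.value = value
        for q in self.queues:
            q.put_nowait(value)   # buffer the transition for each consumer

async def demo():
    b = StateBroadcaster("idle")
    q = b.subscribe()
    b.set("running")
    b.set("done")                 # a fast transition an Event-based waiter could miss
    return [await q.get(), await q.get()]

print(asyncio.run(demo()))  # ['running', 'done']
```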
also, this is using queues without any size limit, which seems like a memory leak waiting to happen if events ever get enqueued faster than they can be consumed. notably, this couldn't happen with the simpler use cases that events and conditions could satisfy.
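a bounded asyncio.Queue at least makes the overflow explicit instead of letting memory grow silently (a toy sketch, not code from the post):

```python
import asyncio

async def demo():
    q = asyncio.Queue(maxsize=2)   # bounded: the producer sees backpressure
    q.put_nowait(1)
    q.put_nowait(2)
    try:
        q.put_nowait(3)            # queue full: fails fast instead of growing forever
    except asyncio.QueueFull:
        return "full"

print(asyncio.run(demo()))  # full
```

(a producer could also `await q.put(item)` to block until a consumer catches up, rather than failing.)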
> A threading.Lock protects the value and queue list.
unless I'm missing something obvious, this seems like it should be an asyncio.Lock?
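for reference, asyncio.Lock is awaitable (so it cooperates with the event loop instead of blocking it the way threading.Lock would), and it keeps state consistent even across an internal await — a small sketch:

```python
import asyncio

counter = 0

async def bump(lock):
    global counter
    async with lock:             # awaitable acquire: other tasks run while we wait
        current = counter
        await asyncio.sleep(0)   # even with a yield point inside, the lock keeps
        counter = current + 1    # this read-modify-write atomic

async def main():
    lock = asyncio.Lock()        # asyncio primitive, not threading.Lock
    await asyncio.gather(*(bump(lock) for _ in range(5)))
    return counter

print(asyncio.run(main()))  # 5
```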
https://docs.python.org/3/library/queue.html#queue.Queue.tas...
TZubiri•2h ago
There are so many solutions in the middle that I have this theory: most people who get into async don't really know what threading is. Maybe they have a worldview where, before 2023, Python just could not do more than one thing at once (that's what the GIL was, right?), but now, after 3.12, Guido really pulled himself up by his bootstraps, removed the GIL, and implemented async, so now Python can do more than one thing at a time — and so they start learning async to be able to do more than one thing at a time.
There's a huge disconnect between what the Python devs are actually building (a different API for concurrency) and some junior devs who think they're learning bleeding-edge stuff when they're actually learning fundamentals through a very contrived lens.
It 100% comes from ex-node devs. I'll spare you the node criticism, but node has a very specific concurrency model, and node devs who try out Python sometimes run to asyncio as a way to soften the learning curve of the new language. And that's how they get into this mess.
The Python devs are working on these features because they have to work on something, and updates to foundational tech are supposed to have effects over decades; it's very rare that you need to use bleeding-edge features. In 95% of cases you should restrict yourself to features from versions that are 5-10 years old, especially if you come from other languages! You should go old to new, not new to old.
Sorry for the rant, or if I misjudged and am making too broad a claim from a limited set of perspectives.
scuff3d•12m ago
Python's asyncio library is single-threaded, so I'm not sure why you're talking about threads and asyncio like they have anything to do with each other.
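easy to confirm: all of asyncio's "concurrent" coroutines run on one thread:

```python
import asyncio
import threading

async def which_thread():
    return threading.current_thread().name

async def main():
    # several concurrently scheduled coroutines all report the same thread
    names = await asyncio.gather(*(which_thread() for _ in range(3)))
    return set(names)

print(asyncio.run(main()))  # one thread name, three coroutines
```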
Python has been able to do more than one thing at a time for a long time. That's what the multiprocessing library is for. It's not an ideal solution, but it does exist.
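a minimal multiprocessing sketch: separate interpreter processes sidestep the GIL, so CPU-bound work genuinely runs in parallel:

```python
from multiprocessing import Pool

def square(n):
    # a stand-in for any CPU-bound function; must be picklable (top-level)
    return n * n

if __name__ == "__main__":      # guard required for the spawn start method
    with Pool(processes=2) as pool:
        results = pool.map(square, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]
```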