Hello, Hacker News readers. My name is Ilya Egorov, and for the past year I have been developing a somewhat esoteric library for Python. The idea behind it is quite simple: primitives (such as locks) that work in different environments at the same time (for example, an asyncio event loop in one thread plus any number of other threads), with a single instance that is both async-aware and thread-safe, without complicating things for the user.
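To give a feel for it, here is a minimal sketch of the kind of code this enables: one lock instance shared by two asyncio event loops running in different threads. Treat it as an illustration of the intended usage rather than copy-paste documentation; the exact surface may differ slightly, so check the docs.

    import asyncio
    from threading import Thread

    import aiologic

    # One lock instance, shared by every event loop and every thread.
    # (Sketch of the intended usage; see the documentation for the exact API.)
    lock = aiologic.Lock()

    async def worker(thread_id, task_id):
        # The very same instance is awaited from tasks in different threads.
        async with lock:
            print(f"thread={thread_id} task={task_id} holds the lock")
            await asyncio.sleep(1)

    async def main(thread_id):
        await asyncio.gather(worker(thread_id, 0), worker(thread_id, 1))

    # Two independent asyncio event loops, each in its own thread,
    # synchronized through the single lock above.
    for i in range(2):
        Thread(target=asyncio.run, args=[main(i)]).start()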
It all started sometime around 2022-2023, when I was writing a bridge between different messenger APIs, built on AnyIO, for personal use. At the time, I wanted to achieve fairness in processing different chats, but part of the work was CPU-bound, and isolating it into separate synchronous threads would have added noticeable overhead. The most logical idea that came to mind was to parallelize all the asynchronous code, one "asynchronous" thread per chat (an idea that has since been popularized by the advent of free-threading), in order to take advantage of preemptive multitasking. But how do you synchronize and communicate between tasks running in different threads when none of the available asynchronous primitives are thread-safe? I did not like any of the workarounds known at the time, so I started writing my own primitives.
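To make the dilemma concrete, here is a small standard-library-only sketch: threading.Lock is thread-safe, but a coroutine waiting on it stalls the whole event loop, while asyncio.Lock never blocks the loop but gives no thread-safety guarantees, so it cannot be shared with other threads at all.

    import asyncio
    import threading
    import time

    shared_lock = threading.Lock()

    def worker():
        # Stands in for CPU-bound work running in a separate thread.
        with shared_lock:
            time.sleep(1)

    async def ticker():
        for _ in range(10):
            print("tick")            # should appear every 100 ms
            await asyncio.sleep(0.1)

    async def main():
        threading.Thread(target=worker).start()
        await asyncio.sleep(0.2)     # give the worker time to grab the lock
        tick_task = asyncio.create_task(ticker())
        await asyncio.sleep(0)       # the ticker prints its first "tick"
        # Thread-safe, but this blocks the entire event loop for ~0.8 s:
        # no further "tick" appears until the worker releases the lock.
        with shared_lock:
            pass
        await tick_task

    asyncio.run(main())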
In 2023, I tried various approaches to creating a universal lock, which I called an "atomic lock" (since I planned to implement the other primitives I needed on top of it). The principle of operation is quite simple: each instance uses a collections.deque as a token queue (a queue of lock owners; something very similar is used in the current implementation of aiologic.lowlevel.ThreadOnceLock) and a threading.Lock to synchronize the internal logic. Everything seemed to work, and I could have stopped there, but the latter bothered me. As long as my lock relied on threading.Lock, which is known for blocking event loops and thus causing timeout issues, there could be no question of its fairness. The best I could do at that point was to define the lock recursively, so that the event loop blocking time shrank with each level of nesting, at the cost of extra overhead on every call.
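Very roughly, and leaving out everything async, that first design looked something like the sketch below: a FIFO queue of waiter tokens, with a plain threading.Lock guarding only the internal bookkeeping (a simplified reconstruction, not the actual code).

    import collections
    import threading

    class TokenQueueLock:
        # Simplified, threads-only sketch of that early design.

        def __init__(self):
            self._mutex = threading.Lock()       # synchronizes the internal logic
            self._waiters = collections.deque()  # FIFO queue of owner tokens

        def acquire(self):
            token = threading.Event()
            with self._mutex:
                self._waiters.append(token)
                is_owner = len(self._waiters) == 1
            if not is_owner:
                token.wait()                     # woken by the previous owner

        def release(self):
            with self._mutex:
                self._waiters.popleft()          # discard our own token
                successor = self._waiters[0] if self._waiters else None
            if successor is not None:
                successor.set()                  # hand the lock over, FIFO

For an asynchronous caller the token would be something awaitable rather than a threading.Event; the remaining problem is the internal threading.Lock itself, which, when contended, blocks the event loop of whoever is waiting to enter the critical section.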
In the spring of 2024, after digging deep into atomic programming (especially lock-free and wait-free queues), it hit me that something similar could be implemented at the Python level. I did not decide to use effectively atomic operations right away, since almost no one had done so before me (the available information on them is scarce, and I had to research it all myself), but after seeing how reliable they are (they are even used in the standard CPython tests, and free-threading fixes are related to them as well), I created the first primitive that works without any synchronization of its internal logic. My worldview changed, and I went on to create other primitives, also without internal synchronization. A few months later, in August 2024, aiologic was born.
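As a toy illustration of what "effectively atomic" buys you (again a simplified sketch, not aiologic's actual implementation): in CPython, deque.append, deque.popleft, indexing, and len are each a single C-level operation, so the same FIFO lock can drop its internal threading.Lock entirely.

    import collections
    import threading

    class LockFreeTokenQueueLock:
        # Toy sketch that relies on deque operations being effectively
        # atomic in CPython; not the actual aiologic implementation.

        def __init__(self):
            self._waiters = collections.deque()  # FIFO queue of owner tokens

        def acquire(self):
            token = threading.Event()
            self._waiters.append(token)          # effectively atomic enqueue
            if self._waiters[0] is not token:    # someone is ahead of us
                token.wait()                     # woken by the previous owner

        def release(self):
            self._waiters.popleft()              # discard our own token
            if self._waiters:                    # a successor is waiting
                self._waiters[0].set()           # hand the lock over, FIFO

The waiter side still parks on a threading.Event here; in the real library the token has to be something that a coroutine, a greenlet, or a plain thread can each wait on in its own native way, which is where most of the remaining complexity lives.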
Now, a year later, the library is already quite well developed. It supports a wide variety of concurrency libraries, from gevent to Trio, all with as many threads as you like. There are locks, semaphores, queues, and more. A brief description of it comes out somewhat humorous: "GIL-powered", yet it supports free-threading (in essence, this is about effectively atomic operations, which are particularly interesting on PyPy - see the classic benchmark); "locking", yet it almost never uses locks under the hood (except in a few special cases). The development status is still alpha, but in my opinion the reliability is quite high, as I constantly recheck the implementation and fix bugs proactively (see the overly detailed logs).
I hope you find my library interesting.