And it's all for, what? A little memory for thread stacks (most of which ends up being a wash because of all the async contexts being tossed around anyway -- those are still stacks and still big!)? Some top-end performance for people chasing C10k numbers in a world that has scaled into datacenters for a decade anyway?
Not worth it. IMHO it's time to put this to bed.
[1] No one in that thread or post has a good summary, but it's "Rust futures consume wakeup events from fair locks that only emit one event, so can deadlock if they aren't currently being selected and will end up waiting for some other event before doing so."
It's not that they can't be used productively. It's that they probably do more harm than good on balance. And I think async mania is getting there. It was a revelation when node showed it to us 15 years ago. But it's evolved in bad directions, IMHO.
But "invented" and "revealed" are different verbs for a reason. The release of node.js and its pervasively async architecture changed the way a lot of people thought about how to write code. For the better in a few ways. But the resulting attempt to shoe-horn the paradigm into legacy and emerging environments that demanded it live in a shared ecosystem with traditional "blocking" primitives and imperative paradigms has been a mess.
Basically, the overhead of exceptions is probably less than handling the same errors manually in any non-trivial program.
Also, it's not like these tables don't exist in other languages. Both Rust and Go have to unwind.
With the benefit of hindsight, explicit handling and unwinding has proven to be safer and more reliable.
Knowing if a function will yield the thread is actually extremely relevant knowledge you want available.
Node is one place where async-await has zero counter arguments and every alternative is strictly worse.
Node is not a web page, so no reason to limit it to the same patterns.
Then, the next issue would be thread safety. But that could be treated as a separate problem.
The motivation for node was that users wanted to use JavaScript on the server.
What do you mean? A JS runtime can't do anything useful on its own, it can't read files, it can't load dependencies because it doesn't know anything about "node_modules", it can't open sockets or talk to the world in any other way - that's what Node.js provides.
> I would claim it was the only low-effort model available to them and therefore not motivation.
It was a headline feature when it released.
https://web.archive.org/web/20100901081015/https://nodejs.or...
Node dictates that when faced with an async function the result is that I must either implement async myself so I can do await or go into callback rabbit holes by doing .then(). If the function author is nice, they will give me both async and sync versions: readFile() and readFileSync(). But that sucks.
The alternative would be that 1) the decision to go async were mine; 2) the language supports my decision with syntax/semantics.
Ie. if I call the one and only fs.readFile() and want to block I would then do
sync fs.readFile()
Node would take care of performing a nice synchronous call that is beneficial to its event-loop logic and callback pyramid. End of story. And not some JS implementation such as deasync [1], but in core Node.

No, just callbacks and event handlers (and an interface like select/poll/epoll/kqueue for the OS primitives on which you need to wait). People were writing threadless non-blocking code back in the 80's, and while no one loved the paradigm, it was IMHO less bad than the mess we've created trying to avoid it.
One of the problems I'm trying to point out is that we're so far down the rabbit hole in this madness that we've forgotten the problems we're actually trying to solve. And in particular we've forgotten that they weren't that hard to begin with.
You can do that. If you don't await an async call, you have a future object that you can handle however you want.
The sync code might be running in an async context. Your async context might only have one thread. The task you're waiting for can never start because the thread is waiting for it to finish. Boom, you're deadlocked.
Async/await runtimes handle this because awaiting frees the thread. So the obvious thing to do is to await, but then async gets blamed for being viral.
Obviously busy waiting in a single threaded sync context will also explode tho...
Zig's colorless async was purely solving the unergonomic calling convention, at the cost of not knowing whether a function is async (the compiler decides, gives no hints, and if you get it wrong, that's UB).
Arguably the main problem with async is that it is unergonomic. You always have to act as if there were two kinds of functions, while in practice the distinction is almost always self-evident and you could treat sync and async functions the same.
When you know what functions and blocks are synchronous, you know the thread will not be yielded. If you direct async tasks to run on a single thread, you know they will never run concurrently. These together mean you can use that pattern to get lock free critical sections. You don't need to write thread-safe data structures.
If a function can yield implicitly, how do you have the control you need to pull this off?
It's a really common pattern in GUI dev so how does Zig handle that?
Constness is infectious down the stack (the callee of a const function must be const) while asyncness is infectious up the stack (the caller of an async function must be async). So you can gradually add constness to subsections of a codebase while refactoring, only touching those local parts of the codebase. As opposed to async, where adding a single call to an async function requires you to touch all functions back up to main.
I don’t have anything against async, I see the value of event-oriented “concurrency”, but the complaint that async is a poison pill is valid, because the use of async fundamentally changes the execution model to co-operative multitasking, with possible runtime issues.
If a language chooses async, I wish they’d just bite the bullet and make it obvious that it’s a different language / execution model than the sync version.
Calling sync code from async is fine in and of itself, but once you're in a problem space where you care about async, you probably also care about task starvation. So naively, you might try to throw yields around the code base.
And your conclusion is you want the language to be explicit when you're async....so function coloring, then?
Javascript's async as of ten years ago just happened to be an especially annoying implementation of a specific effect.
When is this relevant beyond pleasing the compiler/runtime? I work in C# and JS and I could not care less. Give me proper green threads and don't bother with async.
If yields are implicit, you don't have enough control to really pull that off.
Maybe it's possible, but I haven't seen a popular green-threaded UI framework that lets you run tasks in background threads implicitly. If I need to call a bunch of code to explicitly parcel out background work, that just ends up being async/await with less sugar.
I don't understand why async code is being treated as dangerous or as rocket science. You still maintain complete control, and it's straightforward.
Now that we know about the "futurelock" issue, it will be addressed.
I'm sure Rust and the cargo/crates ecosystem will even grow the ability to mark crates as using async, so that if you really care to avoid them you can filter your search or blow up at compile time upon import. I've been asking for that feature for unsafe code, along with transitive-dependency depth limits.
I often write Rust and I don't find async very attractive, yet so many good projects seem to advertise it as a "killer feature". Diesel.rs doesn't have async, and its authors claim the perf improvement may not be worth it (https://users.rust-lang.org/t/why-use-diesel-when-its-not-as...).
For a single threaded JS program, async makes a lot of sense. I can't imagine any alternative pattern to get concurrency so cleanly.
Because when you require 1 thread per 1 connection, you have trouble getting to thousands of active connections, and people want to scale way beyond that. System threads have overhead that makes them impractical for this use case. The alternatives are callbacks, which everybody hates, and for good reason. Then you have callbacks wrapped by Futures/Promises. And then you have some form of coroutines.
Keep in mind that what Zig is introducing is not what other languages call async/await. It's more like the I/O abstraction in Java, where you can use the same APIs with platform threads and virtual threads; but in Zig you need to pass the io parameter around, while in Java it's done in the background.
No. The alternative is lightweight/green threads and actors.
The thing with await is that it can be retrofitted onto existing languages and runtimes with relatively little effort. That is, it's significantly less effort than retrofitting an actual honest-to-god proper actor system a la Erlang.
Javascript's async/await probably started as a sugar for callbacks (since JS is single-threaded). Many others definitely have that as sugar for whatever threading implementation they have. In C# it's sugar on top of the whole mechanism of structured concurrency.
But I'm mostly talking out of my ass here, since I don't know much about this topic, so everything above is hardly a step above speculation.
How lightweight should threads be to support high scale multitasking?
Writing my own language, capturing stack frames in continuations resulted in figures like 200-500 bytes. Grows with deeply nested code, of course, but surely this could be optimized...
https://www.erlang.org/docs/21/efficiency_guide/processes.ht...
This document says Erlang processes use 309 words which is in the same ballpark.
Erlang also enjoys quite a lot of optimizations at the VM level. E.g. a task is parked/hibernated if there's no work for it to perform (e.g. it's waiting for a message), the switch between tasks is extremely lightweight, VM internals are re-entrant and employ CPU-cache-friendly data structures, and garbage collection is both lightweight and per-thread/task, etc.
Those are all some form of coroutines.
The event loop model is arguably equivalent to coroutines. Just replace yield with return and have the underlying runtime decide which functions to call next by looping through them in a list. You can even stall the event loop and increase latency if you take too long to return. It's cooperative multitasking by another name.
(All modern OSes in common use are 1970s vintage under the hood. All Unix is Bell Labs Unix with some modernization and veneer, and NT is VMS with POSIX bolted on later.)
Go does this by shipping a mini VM in every binary that implements M:N thread pooling fibers in user space. The fact that Go has to do this is also a workaround for OS APIs that date back to before disco was king, but at least the programmer doesn’t have to constantly wrestle with it.
Our whole field suffers greatly from the fact that we cannot alter the foundation.
BTW I use Rust async right now pretty heavily. It strikes me as about as good as you can do to realize this nightmare in a systems language that does not ship a fat runtime like Go, but having to actually see the word “async” still makes me sad.
Our game engine has an in-house implementation - creating a fiber, scheduling it, and waiting for it to complete takes ~300ns on my box. Creating an OS thread and join()ing it is about 1000x slower, ~300us.
https://github.com/lalinsky/zio/blob/main/src/coroutines.zig
Which has the benefit of Zig's single compilation unit: the compiler can be smarter about which registers need to be saved.
It is hard to describe just how much more can be done on a single thread with just async.
I can't say why Diesel.rs doesn't need async, and I should point out that I know very little about Diesel.rs beyond the fact that it has to do with databases. It would seem strange, though, that anything working with databases, an I/O-heavy workload, would not massively benefit from async.
Shouldn't the OS kernel innovate in this area instead of different languages in userland attempting to solve it?
You can implement stackful coroutines yourself in C/C++; you need about 30 lines of assembly (as you can't switch stack pointers and save registers onto the stack from most languages). This is WAY better than what you could do with, for example, the way more convoluted C++ co_await/co_return machinery, for two reasons:
1. Your coroutine has an actual stack - you don't have to allocate a new "stack frame" on the heap every time you call a function and await it.
2. You don't need special syntax for awaiting - any function can just call your Yield() function, which just saves the registers onto the stack and jumps out of the coroutine.
Minicoro [1] is a single-file library that implements this in C. I have yet to dig into the Zig implementation - maybe it's better than the C++/Rust ones, but the fact that they call it "async/await" doesn't give me much hope.
Huh? It’s not like the entire array was passed into each task. Each task just received a pointer to a usize to write to.
Where is concurrent data writing in the example?
To speak to the Zig feature: as a junior I kept bugging the seniors about unit testing and how you were supposed to test things that did IO. An explanation of "functional core imperative shell" would have been helpful, but their answer was: "wrap everything in your own classes, pass them everywhere, and provide mocks for testing". This is effectively what Zig is doing at a language level.
It always seemed wrong to me to have to wrap your language's system libraries so that you could use them the "right way" that is testable. It actually turns out that all languages until Zig have simply done it wrong, and IO should be a parameter you pass to any code that needs it to interact with the outside world.