
Async Rust never left the MVP state

https://tweedegolf.nl/en/blog/237/async-rust-never-left-the-mvp-state
109•pjmlp•2h ago•57 comments

Lessons for Agentic Coding: What should we do when code is cheap?

https://www.dbreunig.com/2026/05/04/10-lessons-for-agentic-coding.html
46•ingve•2h ago•37 comments

Hand Drawn QR Codes

https://sethmlarson.dev/hand-drawn-qr-codes
94•jollyjerry•5h ago•10 comments

Bun is being ported from Zig to Rust

https://github.com/oven-sh/bun/commit/46d3bc29f270fa881dd5730ef1549e88407701a5
494•SergeAx•8h ago•349 comments

CVE-2026-31431: Copy Fail vs. rootless containers

https://www.dragonsreach.it/2026/05/04/cve-2026-31431-copy-fail-rootless-containers/
100•averi•5h ago•33 comments

How OpenAI delivers low-latency voice AI at scale

https://openai.com/index/delivering-low-latency-voice-ai-at-scale/
405•Sean-Der•13h ago•123 comments

Train Your Own LLM from Scratch

https://github.com/angelos-p/llm-from-scratch
235•kristianpaul•5h ago•25 comments

Google Chrome silently installs a 4 GB AI model on your device without consent

https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/
163•john-doe•2h ago•178 comments

Agent Skills

https://addyosmani.com/blog/agent-skills/
252•BOOSTERHIDROGEN•11h ago•112 comments

Farewell to a Giant of Botany

https://nautil.us/farewell-to-a-giant-of-botany-1280409
22•Brajeshwar•2d ago•0 comments

Empty Screenings – Finds AMC movie screenings with few or no tickets sold

https://walzr.com/empty-screenings
143•MrBuddyCasino•5h ago•116 comments

Why I Created phpc.tv

https://afilina.com/why-phpc-tv
12•luu•1d ago•1 comment

The Frog for Whom the Bell Tolls

https://sethmlarson.dev/the-frog-for-whom-the-bell-tolls
8•anujbans•2h ago•1 comment

Gap between national food production and food-based dietary guidance (2025)

https://www.nature.com/articles/s43016-025-01173-4
59•simonebrunozzi•20h ago•37 comments

pgxbackup: Continuity Support for pgBackRest

https://thebuild.com/blog/2026/05/01/pgxbackup-continuity-support-for-pgbackrest/
48•Wingy•2d ago•4 comments

Securing a DoD contractor: Finding a multi-tenant authorization vulnerability

https://www.strix.ai/blog/how-strix-found-zero-auth-vulnerability-dod-backed-startup
197•bearsyankees•15h ago•81 comments

Does Employment Slow Cognitive Decline? Evidence from Labor Market Shocks

https://www.nber.org/papers/w35117
286•littlexsparkee•18h ago•267 comments

2-D Mathematical Curves

https://www.2dcurves.com/
30•the-mitr•5h ago•1 comment

Mouse Pointer as a Mere Mortal

https://unsung.aresluna.org/mouse-pointer-as-a-mere-mortal/
5•zdw•2d ago•0 comments

When Networking Doesn't Work

https://www.os2museum.com/wp/when-networking-doesnt-work/
63•kencausey•12h ago•11 comments

Redis array: short story of a long development process

https://antirez.com/news/164
282•antirez•19h ago•97 comments

Kids bypass age verification with fake moustaches

https://www.theregister.com/2026/05/04/uk_online_safety_act_age_checks_subvert/
90•dreadsword•4h ago•49 comments

Talking to strangers at the gym

https://thienantran.com/talking-to-35-strangers-at-the-gym/
1352•thitran•21h ago•643 comments

Testing macOS on the Apple Network Server 2.0 ROMs

http://oldvcr.blogspot.com/2026/05/testing-macos-on-apple-network-server.html
86•zdw•1d ago•17 comments

1966 Ford Mustang Converted into a Tesla with Working 'Full Self-Driving'

https://electrek.co/2026/05/02/tesla-1966-mustang-ev-conversion-full-self-driving/
169•Brajeshwar•18h ago•126 comments

Microsoft Edge stores all passwords in memory in clear text, even when unused

https://twitter.com/L1v1ng0ffTh3L4N/status/2051308329880719730
541•cft•15h ago•193 comments

PyInfra 3.8.0

https://github.com/pyinfra-dev/pyinfra/releases/tag/v3.8.0
282•wowi42•20h ago•92 comments

I am worried about Bun

https://wwj.dev/posts/i-am-worried-about-bun/
486•remote-dev•16h ago•317 comments

Biscuit

https://github.com/yattsu/biscuit
39•unixfg•6h ago•0 comments

How Monero’s proof of work works

https://blog.alcazarsec.com/tech/posts/how-moneros-proof-of-work-works
281•alcazar•19h ago•199 comments

Async Rust never left the MVP state

https://tweedegolf.nl/en/blog/237/async-rust-never-left-the-mvp-state
107•pjmlp•2h ago

Comments

hmry•1h ago
Great article! Love these types of deep dives into optimizations. Hope the project goal works out!

I've felt before that compilers often don't put much effort into optimizing the "trivial" cases.

Overly dramatic title for the content, though. I would have clicked "Async Rust Optimizations the Compiler Still Misses" too, you know.

Animats•1h ago
Agree on title. Too dramatic.

The author seems to be obsessing about the overhead for trivial functions. He's bothered by overhead for states for "panicked" and "returned". That's not a big problem. Most useful async blocks are big enough that the overhead for the error cases disappears.

He may have a point about lack of inlining. But what tends to limit capacity for large numbers of activities is the state space required per activity.

molf•1h ago
> Most useful async blocks are big enough that the overhead for the error cases disappears.

Is it really though?

In my experience many Rust applications/libraries can be quite heavy on the indirection. One of the points from the article is that contrary to sync Rust, in async Rust each indirection has a runtime cost. Example from the article:

    async fn bar(blah: SomeType) -> OtherType {
        foo(blah).await
    }
I would naively expect the above to be a 'free' indirection, paying only a compile-time cost for the compiler to inline the code. But after reading the article I understand this is not true, and it has a runtime cost as well.
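For illustration, here is a hand-written sketch (an assumption about the general shape, not actual compiler output) of the state machine such a forwarding wrapper lowers to, with a placeholder `SomeType`:

```rust
use std::mem::size_of;

// Placeholder standing in for the article's `SomeType`.
struct SomeType;

// Sketch of the state machine conceptually generated for the trivial
// forwarding `async fn` above: even a one-line wrapper carries its own
// discriminant plus the embedded inner future, which is why the
// indirection has a runtime cost.
#[allow(dead_code)]
enum BarFuture<InnerFut> {
    Unresumed { blah: SomeType },
    Awaiting { inner: InnerFut },
    Returned,
    Panicked,
}

fn main() {
    // The wrapper is strictly larger than the future it embeds,
    // since `[u8; 16]` leaves no niche for the discriminant.
    println!("{}", size_of::<BarFuture<[u8; 16]>>());
}
```

The wrapper can never be smaller than the inner future, so sizes compound with each layer of indirection.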
swiftcoder•1h ago
> Most useful async blocks are big enough that the overhead for the error cases disappears.

Most useful async blocks are deeply nested, so the overhead compounds rapidly. Check the size of futures in a decently large Tokio codebase sometime
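The compounding is easy to observe directly. A minimal sketch (hypothetical `leaf`/`mid`/`outer` functions, not from the article):

```rust
use std::mem::size_of_val;

// Hypothetical three-deep chain of async fns, mirroring the nesting
// common in real codebases.
async fn leaf() -> u32 {
    std::future::ready(41u32).await + 1
}

async fn mid() -> u32 {
    leaf().await + 1
}

async fn outer() -> u32 {
    mid().await + 1
}

fn main() {
    // Each level embeds the inner future plus its own state, so the
    // future's size grows with nesting depth. (The futures are never
    // polled here; we only inspect their sizes.)
    println!(
        "{} {} {}",
        size_of_val(&leaf()),
        size_of_val(&mid()),
        size_of_val(&outer())
    );
}
```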

dathinab•44m ago
> Agree on title. Too dramatic.

Not just too dramatic: given that everything they list is a non-essential optimization, that some of it falls under "micro-optimizations I'm not sure Rust even wants", and given how far the current async is from its old MVP state, it's more outright dishonest than overly dramatic.

It's the kind of clickbait that suggests the author cares neither about respecting the reader nor about honest communication, which for someone hoping to attract open source contributions is kinda ... not so clever.

Though in general I agree Rust should have more HIR/MIR optimizations, at least in release mode. E.g. it's very common that an async function is not pub and is directly awaited in all places (or can otherwise be proven to be called only once); in that case neither `Returned` nor `Panicked` is needed, as the future can't be polled again after either. Similarly, `Unresumed` isn't needed either, as you can directly run the code up to the first await (and with such a transform, their points about "inlining" and "async fns without await still having a state machine" would also "just go away"TM, at least in some places). Separately, the whole `.map_or(a, b)` family of functions is IMHO an anti-pattern: it introduces more functions with unclear argument ordering, drops the signaling `unwrap_`, and offers no benefit beyond minimally shortening `.map(b).unwrap_or(a)` plus some micro-optimization, which is not productive in an already complicated language. Guaranteed optimizations for the kind of patterns `.map(b).unwrap_or(a)` inlines to would be much better.
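For reference, the equivalence in question, as a runnable sketch. `.map_or` is a pure shorthand, with the default value arguably in a surprising position (first, before the mapping closure):

```rust
fn main() {
    let some: Option<i32> = Some(3);
    let none: Option<i32> = None;

    // `.map_or(default, f)` behaves exactly like
    // `.map(f).unwrap_or(default)`; note the default comes first.
    assert_eq!(some.map_or(0, |x| x * 2), some.map(|x| x * 2).unwrap_or(0));
    assert_eq!(none.map_or(0, |x| x * 2), none.map(|x| x * 2).unwrap_or(0));
}
```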

algesten•41m ago
He's optimizing for embedded no-std situation. These things do matter in constrained environments.
repelsteeltje•15m ago
> [...] That's not a big problem [...]

Depends somewhat on your expectations, I suppose. Compared to Python or Java, sure, but Rust of course strives to offer "zero-cost" high-level concepts.

I think the critique is in the same realm of C++'s std::function. Convenience, sure, but far from zero-cost.

groundzeros2015•1h ago
Async seems like an underbaked idea across the board. Regular code was already async: when you need to wait for an async operation, the thread sleeps until ready and the kernel abstracts it away. But we didn't like structuring code into logical threads, so we added callback systems for events. Then we realized callbacks are very hard to reason about and that sequential control is better.

So threads was the right programming model.

Now language runtimes prefer “green threads” for portability and performance but most languages don’t provide that properly. Instead we have awkward coloring of async/non-async and all these problems around scheduling, priority, and no-preemption. It’s a worse scheduling and process model than 1970.

swiftcoder•1h ago
> So threads was the right programming model.

For problems that aren't overly concerned with performance/memory, yes. You should probably reach for threads as a default, unless you know a priori that your problem is not in this common bucket.

Unfortunately there is quite a lot of bookkeeping overhead in the kernel for threads, and context switches are fairly expensive, so in a number of high performance scenarios we may not be able to afford kernel threading

groundzeros2015•1h ago
In that sentence I’m referring to the abstract idea of a thread of execution as a model of programming, not OS threads. A green thread implementation could do it too.

But what you said about kernel implementation is true. But are we really saying that the primary motivation for async/await is performance? How many programmers would give that answer? How many programs are actually hitting that bottleneck?

Doesn’t that buck the trend of every other language development in the past 20 years, emphasizing correctness and expressiveness over raw performance?

swiftcoder•1h ago
> But are we really saying that the primary motivation for async/await is performance?

The original motivation for not using OS threads was indeed performance. Async/await is mostly syntax sugar to fix some of the ergonomic problems of writing continuation-based code (Rust more or less skipped the intermediate "callback hell" with futures that Javascript/Python et al suffered through).

nchie•1h ago
> But are we really saying that the primary motivation for async/await is performance?

Of course - what else would it be? The whole async trend started because moving away from each http request spawning (or being bound to) an OS thread gave quite extreme improvements in requests/second metrics, didn't it?

groundzeros2015•59m ago
I agree. Managing many http requests or responses was a motivating problem.

What I question is 1. whether most programs resemble that, enough to justify making it an invasive feature of every general-purpose language, and 2. whether programmers are making a conscious choice, having actually ruled out the perf overhead of the simpler model we get by default.

swiftcoder•46m ago
That is why we have the function colouring problem and a split ecosystem in the first place - if it were obviously better in all cases, we'd make async the default, and get rid of the split altogether (and there are languages, like Erlang, that fall on this side of the fence)
sureglymop•30m ago
Importantly though, performance might be worse depending on use case and program. Specifically with scheduling in user space it can negatively impact branch prediction as your CPU is already hyper optimized for doing things differently.

It's all nuanced and what to choose requires careful evaluation.

pkolaczk•1h ago
Threads are neither better nor worse than async+callbacks. They are different. There are problems which map nicely to threads, and there are problems which are much nicer to express with async.
groundzeros2015•1h ago
Such as? The entire premise of async is that callbacks were a mistake because they broke sequential reasoning and control.

Every explanation of the feature starts with managing callback hell.

codedokode•18m ago
Callbacks should just be hidden from the programmer; that's what async/await is for.
repelsteeltje•6m ago
Beware, they are different concepts.

Threads offer concurrent execution, async (futures) offer concurrent waiting. Loosely speaking, threads make sense for CPU bound problems, while async makes sense for IO bound problems.
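A minimal sketch of the "concurrent execution" half, using nothing but the standard library: two 50 ms sleeps on separate OS threads overlap, so wall time tracks the longest sleep rather than the sum.

```rust
use std::thread;
use std::time::{Duration, Instant};

// Run two 50 ms sleeps on separate OS threads and measure wall time.
fn concurrent_sleeps() -> Duration {
    let start = Instant::now();
    let handles: Vec<_> = (0..2)
        .map(|_| thread::spawn(|| thread::sleep(Duration::from_millis(50))))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    // Roughly 50 ms rather than 100 ms: the sleeps overlap.
    println!("{:?}", concurrent_sleeps());
}
```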

nananana9•1h ago
> the thread sleeps until ready and the kernel abstracts it away.

Sure, but once you involve the kernel and OS scheduler things get 3 to 4 orders of magnitude slower than what they should be.

The last time I was working on our coroutine/scheduling code creating and joining a thread that exited instantly was ~200us, and creating one of our green threads, scheduling it and waiting for it was ~400ns.

You don't need to wait 10 years for someone else to design yet another absurdly complex async framework, you can roll your own green threads/stackful coroutines in any systems language with 20 lines of ASM.

groundzeros2015•1h ago
1. Why can’t we have better green threads implementations with better scheduling models?

2. Unchecked array operations are a lot faster. Manual memory management is a lot faster. Shared memory is a lot faster.

Usually when you see someone reach for sharp and less expressive tools it’s justified by a hot code path. But here we jump immediately to the perf hack?

3. How many simultaneous async operations does your program have?

vlovich123•58m ago
Well, if you offload heavy compute into an async task, then usually it depends strictly on how many concurrent inputs you are given. But even something as "simple" as an editor benefits from this if done well; that's why JS text editors have reasonably acceptable performance whereas Java IDEs always struggled (historically, anyway, since even Java has now adopted green threads).
groundzeros2015•45m ago
You think IDEs are written in JS because of the performance benefits of the threading model?

I thought it was because they could copy chromium.

vlovich123•40m ago
Why do you think they don't struggle with input latency? Because the non-blocking nature built into the browser model is so powerful, and you cannot get that with threads.
groundzeros2015•26m ago
I disagree with the premise. I cannot imagine a better latency experience than blocking loop IDEs like VS6.

Which inputs are getting latency? The keyboard? The files?

> the non blocking nature

https://youtu.be/bzkRVzciAZg?si=BuBXxHTgN0OqsAhI

usrnm•23m ago
Are you sure that the latency-sensitive parts are written in async JS instead of having a separate UI thread (pool)? I have no idea myself, but without knowing the details it's hard to argue. Note that browsers themselves are usually written in languages like C++ or Rust; they run JS, but aren't written in it.
ptx•14m ago
Are you sure Java's UI issues are caused by threading and not just Swing being a glitchy pile of junk?

For example, if you don't explicitly call the java.awt.Toolkit.sync() method after updating the UI state (which according to the docs "is useful for animation"), Swing will in my experience introduce seemingly random delays and UI lag because it just doesn't bother sending the UI updates to the window system.

hacker_homie•1h ago
I’m just waiting for them to try co-operative multithreading again.
usrnm•1h ago
The problem comes from trying to have it both ways: we want async but also want to be able to opt out. This is what causes most of the ugliness, including function colouring. Just look at golang, where everything is async with no way to change it; it's great. It's probably not well suited to things like microcontrollers, where every byte matters, but if you can afford the overhead, it's so much better than Rust async. Before async, Rust was an interesting and reasonable language; now it's just a hot mess that makes your eyes bleed for no reason.
vanderZwan•16m ago
> It's, probably, not well-suited for things like microcontrollers, where every byte matters, but if you can afford the overhead, it's so much better than Rust async.

There is one hill I'll die on, as far as programming languages go, which is that more people should study Céu's structured synchronous concurrency model. It specifically was designed to run on microcontrollers: it compiles down to a finite state machine with very little memory overhead (a few bytes per event).

It has some limitations in terms of how its "scheduler" scales when there are many trails activated by the same event, but breaking things up into multiple asynchronous modules would likely alleviate that problem.

I'm certain a language that supported the "Globally Asynchronous, Locally Synchronous" (GALS) paradigm could have its cake and eat it too. Meaning something that combines support for a green-threading model of choice for async events with structured local reactivity a la Céu.

F'Santanna, the creator of Céu, actually has been chipping away at a new programming language called Atmos that does support the GALS paradigm. However, it's a research language that compiles to Lua 5.4. So it won't really compete with the low-level programming languages there.

[0] https://ceu-lang.org/

[1] https://github.com/atmos-lang/atmos

vlovich123•1h ago
> Regular code was already async. When you need to wait for an async operation, the thread sleeps until ready and the kernel abstracts it away

Not really. I’ve observed async code often is written in such a way that it doesn’t maximize how much concurrency can be expressed (eg instead of writing “here’s N I/O operations to do them all concurrently” it’s “for operation X, await process(x)”). However, in a threaded world this concurrency problem gets worse because you have no way to optimize towards such concurrency - threads are inherently and inescapably too heavy weight to express concurrency in an efficient way.

This is not a new lesson: work-stealing executors have long been known to offer significantly lower latency with more consistent P99 than traditional threads. This has been known forever; it's why Apple developed GCD back in the 00s. Threads simply don't give the kernel scheduler any richer information about the workload, and kernel threads are an insanely heavy mechanism for achieving fine-grained concurrency, even worse when that concurrency is I/O-bound or a mixed workload rather than pure compute that's embarrassingly easy to parallelize.

Do all programs need this level of performance? No, probably not. But it is significantly more trivial to achieve a higher performance bar and in practice achieve a latency and throughput level that traditional approaches can’t match with the same level of effort.

You can tell async is directionally kind of correct in that io_uring is the kernel’s approach to high performance I/O and it looks nothing like traditional threading and syscalls and completion looks a lot closer to async concurrency (although granted exploiting it fully is much harder in an async world because async/await is an insufficient number of colors to express how async tasks interrelate)

groundzeros2015•54m ago
I am not saying threads are the model for all programming problems. For example a dependency graph like an excel spreadsheet can be analyzed and parallelized.

But as you observed, async/await fails to express concurrency any better. It’s also a thread, it’s just a worse implementation.

vlovich123•46m ago
That’s incorrect. Even when expressed suboptimally, it still tends to result in overall higher throughput and consistently lower latency (work stealing executors specifically). And when you’re in this world, you can always do an optimization pass to better express the concurrency. If you’ve not written it async to start with, then you’re boned and have no easy escape hatch to optimize with.
groundzeros2015•42m ago
Why can’t you do the same optimization? Are you maxing out your OS's resources on thread overhead?
BlackFly•43m ago
I think that callbacks are actually easier to reason about:

When it comes time to test your concurrent processing, to ensure you handle race conditions properly, that is much easier with callbacks because you can control their scheduling. Since each callback represents a discrete unit, you see which events can be reordered. This enables you to more easily consider all the different orderings.

Instead with threads it is easy to just ignore the orderings and not think about this complexity happening in a different thread and when it can influence the current thread. It isn't simpler, it is simplistic. Moreover, you cannot really change the scheduling and test the concurrent scenarios without introducing artificial barriers to stall the threads or stubbing the I/O so you can pass in a mock that you will then instrument with a callback to control the ordering...

The problem with callbacks is that the call stack when captured isn't the logical callstack unless you are in one of the few libraries/runtimes that put in the work to make the call stacks make sense. Otherwise you need good error definitions.

You can of course mix the paradigms and have the worst of both worlds.
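As a sketch of the testing point above (hypothetical `read`/`write` completion callbacks): when completions are modeled as plain closures, the test picks the interleaving explicitly instead of hoping a thread scheduler happens to hit the race.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared log recording the order in which completions fired.
    let log: Rc<RefCell<Vec<&'static str>>> = Rc::new(RefCell::new(Vec::new()));

    // Build a completion callback that records its name when invoked.
    let make_cb = |name: &'static str| {
        let log = Rc::clone(&log);
        move || log.borrow_mut().push(name)
    };

    let on_read_done = make_cb("read");
    let on_write_done = make_cb("write");

    // The test controls the "schedule": deliver write before read,
    // then assert on exactly that interleaving.
    on_write_done();
    on_read_done();

    assert_eq!(*log.borrow(), ["write", "read"]);
}
```

Re-running the same test body with the calls swapped exercises the other ordering, which is the control over scheduling the comment describes.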

groundzeros2015•41m ago
I agree. I don’t think callbacks are an underbaked language feature.
codedokode•41m ago
As I understand it, "green threads" are also expensive: for example, you either need to allocate a large stack for each "thread", or hook stack allocation to grow the stack dynamically (like Go does). And if you grow the stack, you might have to move it, and then you cannot have pointers to stack objects.
slopinthebag•1h ago
It's so funny that people will do anything to hate on Rust, including nitpicking a few bytes of overhead for a future while they reach for an entire thread or runtime to handle async in their favourite language.
rnijveld•1h ago
You realize this article talks about Rust on embedded hardware specifically, where you don't have threads or big runtimes? There is no hate going on here either, just attempts to make things better. Might I suggest you click through to the homepage; I think you'll figure out the rest.
MrBuddyCasino•58m ago
Nobody seriously tries to run Golang or Java on an MCU. But they do run Rust code.
lionkor•39m ago
It's more that I and people I know love Rust, and enjoy it, and want it to be better. I want it to be relentlessly optimized.
berkes•38m ago
I know the people and the company behind this article. They do anything but "hate on Rust".

You could've deduced that from the fact that someone who puts this amount of energy into a detailed article about the intricacies of an area of "foo" quite certainly does not "hate on foo".

slopinthebag•36m ago
Not the article, the comments here man.

The article is fine besides the bait title.

gspr•22m ago
I _love_ Rust and use it whenever I can. I still find the comments in here to be quite appropriate. Async Rust leaves me with a (subjective!) feeling that something isn't quite right. Not that I know how it _should_ be, but that feeling is very different from the non-async parts of the language that almost always leaves me with a warm fuzzy feeling of joy.

I don't know enough about the domain to be objectively helpful, so it's all wishy-washy feelings on my part. I keep reaching for orchestrating things with threads in Rust where most people would probably reach for async these days. The only language where I've felt fine embracing the blessed async system is Haskell and its green threads (which I understand come with their own host of problems).

hacker_homie•1h ago
These are the ugly but necessary discussions that have been happening in C++ for a while.

I never really liked the viral nature of async in Rust when it was introduced.

I wish Rust the best of luck; with more people like this, Rust could have a brighter future.

conaclos•57m ago
I recently started working with async Rust. The main issue I am currently facing is code duplication: I have to duplicate every function for which I want to support both asynchronous and blocking APIs. It would be great to have a `maybe-async`. I took a look at the available crates that work around this (maybe-async, bisync), but they all have issues or hard limitations.
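A minimal sketch of the duplication (a hypothetical `body_len` pair, assuming nothing from the crates mentioned): the bodies are identical except for `async` and `.await`.

```rust
use std::future::Future;

// Blocking flavor: takes a synchronous producer.
fn body_len_blocking(read: impl FnOnce() -> String) -> usize {
    let body = read();
    body.trim().len()
}

// Async flavor: the same logic, duplicated only so `.await` can be
// threaded through. Shown to illustrate the coloring split; it needs
// an executor to actually run.
#[allow(dead_code)]
async fn body_len_async<F, Fut>(read: F) -> usize
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = String>,
{
    let body = read().await;
    body.trim().len()
}

fn main() {
    println!("{}", body_len_blocking(|| "  hello  ".to_string()));
}
```

Every such pair has to be kept in sync by hand, which is exactly the maintenance burden the comment describes.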
albertzeyer•53m ago
The classic function coloring problem. https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
small_scombrus•41m ago
It'll depend immensely on what you're actually doing, but if it's simple enough you may be able to make a macro that subs out the types & awaits
forrestthewoods•56m ago
Love Rust. They simply missed the mark with async. Swing and a miss.

The risk they took was very calculated. Unfortunately they’re bad at math and chose the wrong trade-offs.

Ah well. Shit happens.

lionkor•40m ago
I think Rust has a pretty solid async implementation, compared to other systems languages. I struggle to point out another systems language with a working and actually used async implementation.
swiftcoder•39m ago
> Unfortunately they’re bad at math and chose the wrong trade-offs

They chose the exact same tradeoffs as C++'s async/await (and the same overall model as Python/NodeJS), so I'm not sure what that says about programming as a whole.

nromiun•29m ago
Async in Rust and C++ is nothing like it is in Python or NodeJS. Choose your own runtime is a very different model than having a default one.

Not to mention Tokio (most popular runtime for Rust) is multi-threaded by default. So you have to deal with multithreading bugs as well as normal async ones. That is not the case with most async languages. For example both Python and NodeJS use a single thread to execute async code.

ozgrakkurt•34m ago
Does this kind of thing make noticeable difference when applied to more complicated async functions?

The examples in the blog seem too simple to draw any conclusions from.

diondokter•27m ago
Hi, author here. I mention in the blog that I tried quickly hacking two of the simplest optimizations into the compiler, and it resulted in 2%-5% binary size savings in real embedded (async) codebases. A quick and probably deeply flawed synthetic benchmark on desktop also showed a 3% perf increase.

So yes, it really does matter. Keep in mind that optimizations stack: we're currently preventing LLVM from doing its thing, so if we make the futures themselves smaller, LLVM will be able to optimize more. Small changes really compound.

DeathArrow•16m ago
I like Zig's approach to async with the new IO more. It avoids function coloring.
InfinityByTen•9m ago
I like this article already because it pointed me to the Rust project goals for 2026. We use the language in our team, but we haven't needed to go very deep for the stuff we do. Still, I really enjoy witnessing the development of a language from the ground up with so much community feedback.

I somehow never noticed that happening in C++, and I have no idea how it works in other domains.

My only gripe is that a lot of it feels a bit Kickstarter-y, with each of the goals needing specific funding. Is that the best model we've found so far?
My only gripe is that a lot of it is feeling a bit kick-starter-y, with each of the goals needing specific funding. Is that the best model we've found so far?