Garbage collection for Rust: The finalizer frontier

https://soft-dev.org/pubs/html/hughes_tratt__garbage_collection_for_rust_the_finalizer_frontier/
134•ltratt•1d ago

Comments

king_terry•1d ago
The whole point of Rust is to not have a garbage collector while not worrying about memory leaks, though.
GolDDranks•1d ago
The mechanisms that Rust provides for memory management are various. Having a GC as a library for use cases with shared ownership / handles-to-resources is not out of the question. The problem is that they have been hard to integrate with the language.
jvanderbot•1d ago
While you're of course correct, there's just something that feels off. I'd love it if we kept niche, focused-purpose languages once in a while, instead of having every language do everything. If you prioritize everything, you prioritize nothing.
GolDDranks•1d ago
I agree, specifically with regard to GC; I think that being an excellent language for low-level programming (linkable language-agnostic libraries, embedded systems, performance-sensitive systems, high-assurance systems, etc.) should continue to be the focus.

However, this is 3rd party research. Let all flowers bloom!

sebastianconcpt•1d ago
> If you prioritize everything you prioritize nothing

Well...

If you prioritize everything you prioritize generalism.

(the "nothing" part comes from our natural limitation to pay enough multidisciplinary attention to details but that psychological impression is being nuked with AI as we speak and the efforts to get to AGI are an attempt to make synthetic "intelligence" be able to gain dominion over this spirit before us)

virgilp•1d ago
No, they are quite identical. Both cases logically lead to "now everything has the same priority". There's nothing about generalism in there.
bryanlarsen•23h ago
Just like when hiring developers, there's an advantage in choosing "jack of all trades, master of some".
IainIreland•23h ago
One clear use case for GC in Rust is for implementing other languages (eg writing a JS engine). When people ask why SpiderMonkey hasn't been rewritten in Rust, one of the main technical blockers I generally bring up is that safe, ergonomic, performant GC in Rust still appears to be a major research project. ("It would be a whole lot of work" is another, less technical problem.)

For a variety of reasons I don't think this particular approach is a good fit for a JS engine, but it's still very good to see people chipping away at the design space.

quotemstr•21h ago
Would you plug Boehm GC into a first class JS engine? No? Then you're not using this to implement JS in anything approaching a reasonable manner either.
zorgmonkey•20h ago
It looks like the API of Alloy was at least designed in such a way that they can somewhat easily swap the GC implementation out down the line, and I really hope they do, because Boehm GC, and conservative GC in general, is much too slow compared to state-of-the-art precise GCs.
quotemstr•20h ago
It's not an implementation thing. It's fundamental. A GC can't move anything it finds in a conservative root. You can build partly precise hybrid GCs (I've built a few) but the mere possibility of conservative roots complicates implementation and limits compaction potential.

If, OTOH, Alloy is handle based, then maybe there's hope. Still a weird choice to use Rust this way.

ltratt•20h ago
We don't exactly want Alloy to have to be conservative, but Rust's semantics allow pointers to be converted to usizes (in safe mode) and back again (in unsafe mode), and this is something code really does. So if we wanted to provide an Rc-like API -- and we found reasonable code really does need it -- there wasn't much choice.

I don't think Rust's design in this regard is ideal, but then again what language is perfect? I designed languages for a long while and made far more, and much more egregious, mistakes! FWIW, I have written up my general thoughts on static integer types, because it's a surprisingly twisty subject for new languages https://tratt.net/laurie/blog/2021/static_integer_types.html
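For illustration, a minimal sketch (not from the paper) of the pattern being described; only the final dereference needs unsafe, so a collector can't rule the round trip out:

    fn main() {
        let b = Box::new(42u32);
        let p: *const u32 = &*b;      // taking a raw pointer is safe
        let addr = p as usize;        // so is casting it to an integer
        // ... much later, possibly after the original pointer is gone ...
        let q = addr as *const u32;   // integer back to pointer: still safe to create
        unsafe { assert_eq!(*q, 42) } // only dereferencing it needs unsafe
    }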

quotemstr•17h ago
> We don't exactly want Alloy to have to be conservative, but Rust's semantics allow pointers to be converted to usizes (in safe mode) and back again (in unsafe mode), and this is something code really does. So if we wanted to provide an Rc-like API -- and we found reasonable code really does need it -- there wasn't much choice.

You can define a set of objects for which this transformation is illegal --- use something like pin projection to enforce it.

ltratt•17h ago
The only way to forbid it would be to forbid creating pointers from `Gc<T>`. That would, for example, preclude a slew of tricks that high performance language VMs need. That's an acceptable trade-off for some, of course, but not all.
quotemstr•17h ago
Not necessarily. It would just require that deriving these pointers be done using an explicit lease that would temporarily defer GC or lock an object in place during one. You'd still be able to escape from the tyranny of conservative scanning everything.
nitwit005•16h ago
Once you are generating and running your own machine code, isn't the safety of Rust generally out the window?
hedora•22h ago
Yeah; I wish they'd gone the other way and made memory leaks unsafe (yes, this means no Rc or Arc). That way, you could pass references across async boundaries without causing the borrow checker to spuriously error out.

(It's safe to leak a promise, so there's no way for the borrow checker to prove an async function actually returned before control flow is handed back to the caller.)

dzaima•22h ago
Same as with GC, neither need be a fixed choice; having a GC library/feature in Rust wouldn't mean that everything will be and must be GC'd; and it's still possible to add unleakable types were it desired: https://rust-lang.github.io/keyword-generics-initiative/eval... while keeping neat things like Rc<T> available for things that don't care. (things get more messy when considering defaults and composability with existing libraries, but I'd say that such issues shouldn't prevent the existence of the options themselves)
Manishearth•22h ago
Worth highlighting: library-level GC would not be convenient enough to use pervasively in Rust anyway. library-level GC does not replace Rust's "point".

It's useful to have when you have complex graph structures. Or when implementing language runtimes. I've written a bit about these types of use cases in https://manishearth.github.io/blog/2021/04/05/a-tour-of-safe...

And there's a huge benefit in being able to narrowly use a GC. GCs can be useful in gamedev, but it's a terrible tradeoff to need to use a GC'd language to get them, because then everything is GCd. library-level GC lets you GC the handful of things that need to be GCd, while the bulk of your program uses normal, efficient memory management.
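A rough type-level sketch of that shape, with Rc standing in for a hypothetical library Gc type (a real Gc would additionally reclaim the cycles that Rc leaks):

    use std::rc::Rc;

    // Stand-in: a real library GC type would also collect cycles that Rc leaks.
    type Gc<T> = Rc<T>;

    struct ScriptObject {
        name: String,
        linked: Vec<Gc<ScriptObject>>,  // messy, shared, possibly cyclic graph
    }

    struct World {
        scripts: Vec<Gc<ScriptObject>>, // the small GC'd corner of the program
        positions: Vec<[f32; 3]>,       // everything else: plain ownership, no GC
    }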

zorgmonkey•20h ago
This is a very important point. Careful use of GC for a special subset of allocations that, say, have tricky lifetimes for some reason and aren't performance-critical could have a much smaller impact on overall application performance than people might otherwise expect.
Manishearth•9h ago
Yeah, and it's even better if you have a GC where you can control when the collection phase happens.

E.g. in a game you can force collection to run between frames, potentially even picking which frames it runs on based on how much time you have. I don't know if that's a good strategy, but it's an example of the type of thing you can do.
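A sketch of that idea; the collect_garbage and budget functions here are made-up placeholders for whatever the embedded collector actually exposes:

    // Hypothetical hook into whatever collector the program embeds.
    fn collect_garbage() { /* run a collection cycle */ }

    fn frame_budget_remaining_ms() -> f32 { 6.0 } // stub for illustration

    fn game_loop(mut running: impl FnMut() -> bool) {
        while running() {
            // ... simulate + render the frame ...
            // Only collect when there is slack left in this frame's budget.
            if frame_budget_remaining_ms() > 4.0 {
                collect_garbage();
            }
        }
    }

    fn main() {
        let mut frames = 0;
        game_loop(|| { frames += 1; frames < 3 });
    }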

James_K•22h ago
Actually, memory leaks are the major class of memory error for which Rust offers no protection. See the following safe function in Box:

https://doc.rust-lang.org/std/boxed/struct.Box.html#method.l...
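For reference, leaking in safe Rust really is a one-liner:

    fn main() {
        // Box::leak consumes the Box and hands back a &'static mut reference;
        // the allocation is never freed, and no unsafe code is involved.
        let config: &'static mut String = Box::leak(Box::new(String::from("hello")));
        println!("{config}");
    }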

gizmo686•21h ago
Rust's memory safety guarantees do not ensure the absence of leaks. However, Rust's design does offer significant protection against leaks (relative to languages like C, where all heap allocations must be explicitly freed).

The fact that anyone felt it necessary to add a "leak" function to the standard library should tell you something about how hard it is to leak memory by accident.

umanwizard•20h ago
Not really, modern C++ already makes it about as hard to leak memory as it is in Rust.

Rust has loads of other advantages over C++, though.

ape4•1d ago
If only there was a C++-like language with garbage collection (Java, C#, etc)
reactordev•23h ago
The latest version of C# is a fantastic choice for this. Java too, but I would lean more towards C# due to the new delegate function pointers for source-generated P/Invoke. Thing of beauty.
reactordev•19h ago
I also want to call out the CppAst [0] and CppAst.CodeGen [1] projects. These two have saved me years of my life compared to rolling them by hand. Kudos Alexandre Mutel, kudos.

[0] https://github.com/xoofx/CppAst.NET

[1] https://github.com/xoofx/CppAst.CodeGen

jerf•23h ago
I can't be an expert in every GC implementation because there are so many of them, but many of the problems they mention are problems in those languages too. Finalizers are highly desirable to both the authors of the runtimes and the users of the languages, but are generally fundamentally flawed in GC'd languages, to the point that the advice in those languages is to stay away from them unless you really know what you are doing, and then, to stay away from them even so if you have any choice whatsoever... and that's the "solution" to these problems most languages end up going with.

Which does at least generally work. It's pretty rare to be bitten by these problems, like, less-than-once-per-career levels of rare (if you honor the advice above)... but certainly not unheard of, definitely not a zero base rate.

gwbas1c•1d ago
Before making criticisms that Garbage Collection "defeats the point" of Rust, it's important to consider that Rust has many other strengths:

- Rust has no overhead from a "framework"

- Rust programs start up quickly

- The Rust ecosystem makes it very easy to compile a command-line tool without lots of fluff

- The strict nature of the language helps guide the programmer to write bug-free code.

In short: There's a lot of good reasons to choose Rust that have little to do with the presence or absence of a garbage collector.

I think having a working garbage collector at the application layer is very useful, even if, at a minimum, it only makes Rust easier to learn. I do worry about 3rd party libraries using garbage collectors, because garbage collectors tend to impose a lot of requirements, which is why a garbage collector is usually tightly integrated into the language.

jvanderbot•1d ago
You've just listed "Compiled language" features. Only the 4th point has any specificity to Rust, and even then, is vague in a way that could be misinterpreted.

Rust's predominant feature, the one that brings most of its safety and runtime guarantees, is borrow checking. There are things I love about Rust besides that, but the safety from borrow checking (and everything the borrow checker makes me do) is why I like programming in rust. Now, when I program elsewhere, I'm constantly checking ownership "in my head", which I think is a good thing.

zamalek•1d ago
- Rust is a nice language to use
gwbas1c•23h ago
Oh no, I'm directly criticizing C/C++/Java/C#:

The heavyweight framework (and startup cost) that comes with Java and C# makes them challenging for widely-adopted lightweight command-line tools. (Although I love C# as a language, I find the Rust toolchain much simpler and easier to work with than modern dotnet.)

Building C (and C++) is often a nightmare.

hypeatei•23h ago
> The heavyweight framework

Do you mean the VM/runtime? If so, you might be able to eliminate that with an AOT build.

> I find the Rust toolchain much simpler and easier to work with than modern dotnet

What part of the toolchain? I find them pretty similar with the only difference being the way you install them (with dotnet coming from a distro package and Rust from rustup)

jillesvangurp•19h ago
Exactly, natively compiled garbage collected languages (like Java with Graal; or as executed on Android) don't have a lot of startup overhead. In Java the startup overhead is mostly two things that usually conspire to make things worse:

1) dynamic loading of jar files

2) reflection

Number 1 allows you to load arbitrary jar files with code and execute them. Number 2 allows you to programmatically introspect existing code and then execute logic like "Find me all Foo sub classes and create an instance of those and return the list of those objects". You can do that at any time but a lot of that kind of stuff happens at startup. That involves parsing, loading and introspecting thousands of class files in jar files that need to be opened and decompressed.

Most of "Java is slow" is basically programs loading a lot of stuff at startup, and then using reflection to look for code to execute. You don't have to do those things. But a lot of popular web frameworks like Spring do. A lot of that stuff is actually remarkably quick considering what it is doing. You'd struggle to do this in many other languages. Or at all because many languages don't have reflection. If you profile it, there are millions of calls happening in the first couple of seconds. It's taking time yes. But that code has also been heavily optimized over the years. Dismissing what that does as "java is slow" and X is fast is usually a bit of an apples and oranges discussion.

With Spring Boot, there are dozens of libraries that self initialize if you simply add the dependency or the right configuration to your project. We can argue about whether that's nice or not; I'm leaning to no. But it's a neat feature. I'm more into lighter weight frameworks these days. Ktor server is pretty nice, for example. It starts pretty quickly because it doesn't do a whole lot on startup.

Loading a tiny garbage collector library on startup isn't a big deal. It will add a few microseconds to your startup time maybe. Probably not milliseconds. Kotlin has a nice native compiler. If you compile hello world with it it's a few hundred kilobytes for a self contained binary with the program, runtime, and the garbage collection. It's not a great garbage collector. For memory intensive stuff you are better off using the JVM. But if that's not a concern, it will do the job.

mk89•12h ago
You forgot to mention Quarkus :)
procaryote•22h ago
Hello world in Java is pretty fast. Not Rust-fast, but a lot faster than you'd expect.

Java starting slowly is mostly from all the cruft in the typical Java app, with Spring Boot, dependency injection frameworks, registries, etc. You don't have to have those; it's just that most Java devs use them and can't conceive of a world of low dependencies.

Still not great for command-line apps, but Java itself is much better than Java devs.

kbolino•22h ago
Java's biggest weakness in this area is its lack of value types. It's well known, Project Valhalla has been trying to fix it for years, but the JVM just wasn't built around such types and it's hard to bolt them on after the fact. Java's next biggest weakness (which will become more evident with value types) is its type-erased generics. Both of these problems lead to time wasted on unnecessary GC, and though they can be worked around with arrays and codegen, it's unwieldy to say the least.
pron•21h ago
Project Valhalla will also specialise generics for value types. When you say, "it's hard to bolt on", the challenge isn't technical, but how to do this in a way that adds minimal language complexity (i.e. less than in other languages with explicit "boxed" and "inlined" values). Ideally, this should be done in a way that lets the compiler know which types can be inlined (e.g. they don't require identity) and then lets the compiler decide when it wants to actually inline an instance as a transparent optimisation. The challenge would not have been any smaller had Java done this from the beginning.
kbolino•21h ago
Maybe I picked the wrong wording--I don't mean to diminish the ambitions or scope of Valhalla--but I definitely think the decision to eschew value types at the start has immense bearing on the difficulty of adding them now.

Java's major competitors, C# and Go, both have had value types since day one and reified generics since they gained generics; this hasn't posed any major problems to either language (with the former being IMO already more complex than Java, but the latter being similarly or even less complex than Java).

If the technical side isn't that hard, I'd have expected the JVM to have implemented value types already, making it available to other less conservative languages like Kotlin, while work on smoothly integrating it in Java took as long as needed. Project Valhalla is over a decade old, and it still hasn't delivered, or even seems close to delivering, its primary goals yet.

Just to be clear, I don't think every language needs to meet every need. The lack of value types is not a critical flaw of Java in general, as it only really matters when trying to use Java for certain purposes. After all, C# is very well suited to this niche; Java doesn't have to fit in it too.

pron•19h ago
> Java's major competitors, C# and Go, both have had value types since day one

Yes (well, structs; not really value types), but at a significant cost to FFI and/or GC and/or user-mode threads (due to pointers into the stack and/or middle of objects). Java would not have implemented value types in this way, and doing it the way we want to would have been equally tricky had it been done in Java 1.0. Reified generics also come at a high price, that of baking the language's variance strategy into the ABI (or VM, if you want). However, value types will be invariant (or possibly extensible in some different way), so it would be possible to specialise generics for them without necessarily baking the Java language's variance model into the JVM (as the C# variance model is baked into the CLR).

Also, C# and Go didn't have as much of a choice, as their optimising compilers and GCs aren't as sophisticated as Java's (e.g. Java doesn't actually allocate every `new` object on the heap). Java has long tried to keep the language as simple as possible, and have very advanced compilers and GCs.

> If the technical side isn't that hard, I'd have expected the JVM to have implemented value types already, making it available to other less conservative languages like Kotlin, while work on smoothly integrating it in Java took as long as needed

First, that's not how we do things. Users of all alternative Java Platform languages (aka alternative JVM languages) combined make up less than 10% of all Java platform users. We work on the language, VM, and standard library all together (this isn't the approach taken by .NET, BTW). We did deliver invokedynamic before it was used by the Java language, but 1. that was after we knew how the language would use it, and 2. that was at a time when the JDK's release model was much less flexible.

Second, even if we wanted to work in this way, it wouldn't have mattered here. Other Java Platform languages don't just use the JVM. They make extensive use of the standard library and observability tooling. Until those are modified to account for value types, just a JVM change would be of little use to those languages. The JVM comprises maybe 25% of the JDK, while Kotlin, for example, makes use of over 95% of the JDK.

Anyway, Project Valhalla has taken a very long time, but it's making good progress, and we hope to deliver some of its pieces soon enough.

pjmlp•6h ago
Go I agree; .NET is on par with the JVM, even if it doesn't have the plethora of implementation choices that the JVM has, and the ability to do C++-like coding means there isn't as much pressure for a pauseless GC as in Java.

Looking forward to Project Valhalla updates, I had some fun with the first EA.

pron•2h ago
I'm not sure what is meant here by "on par with the JVM." I'm not trying to claim that one or the other is better, but there is a basic difference in how they're designed and continue to evolve. .NET believes in a language that gives more control on top of a more basic runtime, while Java believes in a language that's smaller built on top of a more advanced runtime. They just make different tradeoffs. .NET doesn't "need" a more advanced runtime because limitations in its runtime can be overcome by more explicit control in the language; Java doesn't "need" a more elaborate language because limitations in the level of control offered by the language can be overcome by a more sophisticated runtime.

I'm not saying these are huge differences, but they're real. C# has more features than the Java language, while Java's compiler and GCs are more sophisticated than the CLR's. Both of these differences are due to conscious choices made by both teams, and they each have their pros and cons. I think these differences are very apparent in how these two platforms tackled high-scale concurrency: .NET did it in the language; Java did it in the runtime. When it comes to value types, we see a similar difference: in C# you have classes and structs (with autoboxing); in Java we'll just have classes that declare whether they care about identity, and the runtime will then choose how to represent each instance in memory (earlier designs did explore "structs with autoboxing", but things have moved on from there, to the point of redefining autoboxing even for Java primitives; for a type that doesn't care about identity, autoboxing becomes an implementation detail - transparently made by the compiler - with no semantic difference, as a pointer or a value cannot be distinguished in such a case - hence https://openjdk.org/jeps/390 - unlike before, when an Integer instance could be distinguished from an int).

pjmlp•2h ago
It means that it does the same JIT optimization tricks that HotSpot performs: escape analysis, devirtualization, inlining method calls, removing marshaling layers when calling into native code, PGO feedback, ...

I would like someone to someday write blog posts about performance like the famous ones from the .NET team, and to not have to depend on something external like JITWatch, but instead have it in the box like .NET does.

Example for upcoming .NET 10,

https://devblogs.microsoft.com/dotnet/performance-improvemen...

Also, C#'s and .NET's low-level programming features are here today, while Project Valhalla's delivery is still in the future, to be done across several versions, assuming that Oracle's management doesn't lose interest in funding the effort after all these years.

It is kind of interesting how, after all these years, the solution is going to be similar in spirit to what Eiffel's expanded types were already offering in 1986.

https://wiki.liberty-eiffel.org/index.php/Expanded_or_refere...

https://archive.eiffel.com/doc/online/eiffel50/intro/languag...

I guess that is what happens when language adoption turns out to go in a different path than originally planned, given Java's origins.

pjmlp•17h ago
Currently it takes lots of boilerplate code; however, with the Project Panama API you can model C types in memory, thus kind of already using value types even if Valhalla isn't here yet.

To avoid manually writing all the Panama boilerplate, you can instead write a C header file with the desired types, and then run jextract through it.

gizmo686•22h ago
Testing on my machine, Hello World in java (openjdk 21) takes about 30ms.

In contrast, "time" reports that rust takes 1ms, which is the limit of it's precision.

Python does Hello World in just 8ms, despite not having a separate AOT compilation step.

The general guidance I've seen for interaction is that things start to feel laggy at 100ms; so 30ms isn't a dealbreaker, but throwing a third of your time budget at the baseline runtime cost is a pretty steep ask.

If you want to use the application as a short-lived component in a larger system, then 30ms on every invocation can be a massive cost.

zigzag312•21h ago
An app that actually does something will probably have even larger startup overhead in Java, as there will be more to compile just-in-time.
pjmlp•18h ago
Only when not using either AOT or JIT cache.
0cf8612b2e1e•19h ago
I recall that Mercurial was really fighting their Python test harness. It essentially would start up a new Python process for each test. At 10ms per test, it added up to something significant, given the volume of tests needed to cover something as complicated as an SCM.
typpilol•9h ago
10ms?

Did they have like 100k tests?

guelo•19h ago
I'm trying and failing to imagine a situation where 30ms startup time would be a problem. Maybe some kind of network service that needs to execute a separate process on every request?
tacticus•16h ago
30ms is the absolute best case. Throw some Spring in there and you're very quickly at 10s. Rub some Spring SOAP in and it's near enough to 60s.
ori_b•15h ago
And imagine if you start adding sleep calls! Those could take minutes to hours, or even days!
ykonstant•6h ago
New HN submission: How I Made My Sleep Function Accidentally Quadratic.
davemp•14h ago
30ms is pretty close to noticeable for anything that responds to user input. 30ms startup + 20-70ms processing would probably bump you into the noticeable latency range.
yeasku•2h ago
People play midi keyboards with 30 ms latency.
pixelpoet•10h ago
It's not about how long someone is willing to wait with a timer and judge it on human timescales, it's about what is an appropriate length of time for the task.

30ms for a program to start, print hello world, and terminate on a modern computer is batshit insane, and it's crazy how many programmers have completely lost sight of even the principle of this.

yeasku•2h ago
Java is a tool, a very good one.
ComputerGuru•22h ago
New AOT C# is nice, but not fully doable with the most common dependencies. It addresses a lot of the old issues (size, bloat, startup latency, etc)
jiggawatts•12h ago
Hilariously, the Microsoft SQL Client is the primary blocker for AOT for most potential usecases.

Want fast startup for an Azure Function talking to Azure SQL Database? Hah… no.

In all seriousness, that one dependency is the chain around the ankle of modern .NET because it’s not even fully async capable! It’s had critical performance regression bugs open for years.

Microsoft’s best engineers are busy partying in the AI pool and forgot about drudgery like “make the basics components work”.

quotemstr•21h ago
Heavyweight startup? What are you talking about? A Graal-compiled Java binary starts in a few milliseconds. Great example of how people don't update prejudices for decades.
pjmlp•18h ago
Only for those that don't know how to use AOT compilation tools for Java and C#.
jraph•17h ago
GraalVM indeed does wonders wrt startup times and in providing a single binary you can call.
pjmlp•9h ago
OpenJ9 as well.

Then there are all the others that used to be commercial like ExcelsiorJET, or surviving ones like PTC and Aicas.

paulddraper•9h ago
Compiling Java AOT doesn’t obviate the need for the JVM.

At least not for Graal.

https://stackoverflow.com/questions/75316542/why-do-i-need-j...

gudzpoz•8h ago
... That post you linked was from two years ago, discussing JEP 295, which was delivered eight years ago. Graal-based AOT has evolved a lot ever since. And the answer even explicitly recommended using native images:

> I think what you actually want to do, is to compile a native image of your program. This would include all the implications like garbage collection from the JVM into the executable.

And it is this "native image" that all the comments above in this thread have been discussing, not JEP 295. (And Graal-based AOT in native images does remove the need to bundle a whole JRE.)

pjmlp•6h ago
Because it is a user problem: instead of compiling with native image, they produced a shared library out of the JAR.

As you can see, it has nothing to do with that Stack Overflow question:

https://www.graalvm.org/jdk25/reference-manual/native-image/

jrop•23h ago
Just going to jump in here and say that there's another reason I might want Rust with a Garbage Collector: The language/type-system/LSP is really nice to work with. There have indeed been times that I really miss having enums + traits, but DON'T miss the borrow checker.
tuveson•23h ago
Maybe try a different ML-influenced language like OCaml or Scala. The main innovation of Rust is bringing a nice ML-style type system to a more low level language.
Yoric•22h ago
Jane Street apparently has a version of OCaml extended with affine types. I'd like to test that, because that would (almost) be the best of all worlds.
nobleach•18h ago
I think you're referring to OxCaml. I'd love to see this make a huge splash. Right now one of the biggest shortcomings of OCaml is that you're still stuck implementing so much stuff from scratch. Languages like Rust, Go and Java have HUGE ecosystems. OCaml is just as old as these languages (even older than Rust, since OCaml inspired Rust and Rust's original compiler was written in OCaml). Since it's not been as popular, it's hard to find well-supported libraries.
debugnik•3h ago
I too wish that some OxCaml features will bring new blood to OCaml. I've been using OCaml for a few years for personal projects and I find the language really simple and powerful at the same time, but I had to implement some foundational libraries myself (e.g. proper JSON, parser combinators), and now I'm considering porting one of those projects to Rust just so I can have unboxed types and better Windows support.

> even older than Rust

That's an understatement, (O)Caml is between 17 and 25 years older than Rust 0.1 depending on which Caml implementation you start counting from.

umanwizard•20h ago
There are other nice things about Rust over OCaml that are mainly just due to its popularity. There are libraries for everything, the ecosystem is polished, you can find answers to any question easily, etc. I don't think the same can be said for OCaml, or at least not to the same extent. It's still a fairly niche language compared to Rust.
nobleach•18h ago
I remember that about 5 years ago, Stack Overflow for OCaml was a nightmare. It was a mishmash of Core (from Jane Street), Batteries, and raw OCaml. New developers were confronted with the prospect of opening multiple libraries with the same functionality (not the correct way of solving any problem).
IshKebab•7h ago
I wouldn't recommend OCaml unless you plan to never support Windows. It finally does support it in OCaml 5, but it's still based around Cygwin, which totally sucks balls.

Also the OCaml community is minuscule compared to Rust's. And the syntax is pretty bonkers in places, whereas Rust is mostly sane.

Compile time is pretty great though. And the IDE support is also pretty good.

tayo42•23h ago
What other language has modern features like rust and is compiled?
procaryote•22h ago
it depends completely on what you put in "modern features"
tayo42•22h ago
Pattern matching, usable abstractions, non-null types, tagged unions (or whatever enums are), build tools, etc.
munificent•21h ago
I'm not sure what you mean by "usable abstractions" and tagged unions are a little verbose because they are defined in terms of closed sets of subtypes, but otherwise Dart has all of those.
tayo42•20h ago
Nothing like "oh you can do that but with this weird work around" or if they're clunky to use
procaryote•20h ago
This sounds more like "this is what I like in rust" than "features any modern language should have" though

If you like rust, use rust. It's very likely the best rust

lmm•13h ago
> This sounds more like "this is what I like in rust" than "features any modern language should have" though

Good build tooling has been around since 2004, and all of the rest of those features have been around since the late 1970s. There's really no excuse for a language not having all of them.

antonvs•13h ago
That’s definitely a list of features that any modern language should have. It’s in no way specific to Rust.
pjmlp•17h ago
Standard ML from 1983, alongside all those influenced by it like Haskell, OCaml, Agda, Rocq,....
lmm•13h ago
Most of those have nothing remotely approaching Rust's level of build tooling.
pjmlp•9h ago
Yet parent was mostly talking about type systems.

If you prefer: Rust tooling is still quite far behind languages like Kotlin and Scala, which I didn't mention, but which also have such a type system.

lmm•9h ago
> If you prefer, Rust tooling is still quite far behind from languages like Kotlin and Scala

I'm not sure that's true, at least when it comes to specifically build tooling. I'd say Cargo is far ahead of Gradle, Ant, or worst of all SBT, and probably even slightly ahead of Maven (which never really reached critical mass in the Kotlin or Scala ecosystems sadly).

pjmlp•8h ago
You are missing the IDE capabilities, maturity of GUI frameworks, a full OS that 80% of the world uses,... the whole tooling package.
lmm•7h ago
"build tools" does not normally refer to those things.
pjmlp•7h ago
It does for me, using IDEs with Borland languages for MS-DOS.

It is about the whole package.

strobe•14h ago
Scala, but it's on the JVM (there's also https://scala-native.org without the JVM, but that doesn't really have a big user base).
cultofmetatron•20h ago
Nim, Zig and OCaml come to mind.
gizmo686•1d ago
Also, the proposed garbage collector is still opt in. Only pointers that are specifically marked as GC are garbage collected. This means that most references are still cleaned up automatically when the owner goes out of scope. This greatly reduces the cost of GC compared to making all heap allocations garbage collected.

This isn't even a new concept in Rust. Rust already has a well-accepted Rc<T> type for reference-counted pointers. From a usage perspective, Gc<T> seems to fit the same pattern.
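For comparison, the Rc<T> usage pattern a Gc<T> would slot into (the note about cycles is the part a GC adds):

    use std::rc::Rc;

    fn main() {
        let shared = Rc::new(vec![1, 2, 3]);
        let alias = Rc::clone(&shared);   // shared ownership without fighting the borrow checker
        println!("{} owners of {:?}", Rc::strong_count(&alias), shared);
        // A Gc<T> is used the same way, but can also reclaim reference cycles,
        // which Rc<T> would leak.
    }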

zigzag312•21h ago
A language where most of the libraries don't use GC, but which has opt-in GC, would be interesting. For example, only your business logic code would use GC (so you can write it more quickly), while the parts where you don't want GC are still written in the same language, avoiding the complexity of FFI.

Add an opt-in JIT for development builds for quick iteration and you don't need any other language. (Except for user scripts where needed.)

yoyohello13•22h ago
I love the rust ecosystem, syntax, and type system. Being able to write Rust without worrying about ownership/lifetimes sounds great honestly.
rixed•22h ago
In all honesty, there are three topics I try to refrain from engaging with on HN, often unsuccessfully: politics, religion, and Rust.

I don't know what you had to go through before reaching Rust's secure haven, but what you just said is true for the vast majority of compiled languages, which are legion.

quotemstr•21h ago
It's the fledging of a new generation of developers. Every time I see one of these threads I tell myself, "you, too, were once this ignorant and obnoxious". I don't know any cure except letting them get it out of their system and holding my nose as they do.
Ar-Curunir•11h ago
Well you might find it good to learn that Rust is based on plenty of ideas dating back decades, so _your_ obnoxious and patronizing attitude is unwarranted.
quotemstr•9h ago
Rust gets some things right and some things wrong. Its designers are generally clueful, but like all humans, fallible. But what does this discussion have to do with Rust exactly? Exactly the same considerations would apply to a C++ GC.

The only thing more cringe than insisting on a GC strategy without understanding the landscape is to interpret everything as an attack on one's favored language.

bregma•20h ago
> politics, religion, and rust

Is there a real distinction between any of those?

James_K•22h ago
Go is probably a better pick in this case.
throwaway127482•10h ago
With data intensive Go applications you eventually hit a point where your code has performance bottlenecks that you cannot fix without either requiring insane levels of knowledge on how Go works under the hood, or using CGo and incurring a high cost for each CGo call (last I heard it was something like 90ns), at which point you find yourself regretting you didn't write the program in Rust. If GC in Rust could be made ergonomic enough, I think it could be a better default choice than Go for writing a compiled app with high velocity. You could start off with an ergonomic GC style of Rust, then later drop into manual mode wherever you need performance.
ViewTrick1002•7h ago
Inviting in nil errors, data races and a near non-existent type system.
victorbjorklund•21h ago
Also, assuming one can mix garbage collection with the borrow checker (is that what it's called in Rust?), one should be able to use GC for things that aren't called that much / aren't that important, and use the normal way for things that benefit from no GC interruptions, etc.
imtringued•20h ago
The problem with conventional garbage collection has very little to do with the principle or algorithms behind garbage collection and more to do with the fact that seemingly every implementation has decided to only support a single heap. The moment you can have isolated heaps almost every single problem associated with garbage collection fades away. The only thing that remains is that cleaning up memory as late as possible is going to consume more memory than doing it as early as possible.
tuveson•19h ago
What problem does that solve with GC, specifically? It also seems like that creates an obvious new problem: If you have multiple heaps, how do you deal with an object in heap A pointing to an object in heap B? What about cyclic dependencies between the two?

If you ban doing that, then you’re basically back to manual memory management.

paulddraper•9h ago
There’s a ton of work that goes into multi-generational management, incremental vs stop the world, frequency heuristics, etc.

A lot of the challenge is there is not just one universal answer for these, the optimum strategies vary case by case.

You are correct that each memory arena is the boundary of the GC. Any references between arenas must be handled manually.

grogers•17h ago
BEAM (i.e. erlang) is exactly that model, every lightweight process has its own heap. I don't see how you'd make that work in a more general environment that supports sharing pointers across threads.
fithisux•19h ago
I really like your work
jadenPete•18h ago
Rust's choice of constructs also makes writing safe and performant code easy. Many other compiled languages lack proper sum and product types, and traits (type classes) offer polymorphism without many of the pitfalls of inheritance, to name a few.
drnick1•12h ago
Aren't Rust programs still considerably larger than their C equivalent because everything is statically linked? It's kind of hard to see that as an advantage.
paulddraper•9h ago
No.

They may be larger because they are doing more work, depends on the program.

But no they don’t statically compile everything.

IshKebab•6h ago
You can get Rust binaries pretty small: https://github.com/johnthagen/min-sized-rust

But in practice it's more like there's an overhead for "hello world" but it's a fixed overhead. So it's really only a problem where you have lots of binaries, e.g. for coreutils. The solution there is a multi-call binary like Busybox that switches on argv[0].

C programs often seem small because you don't see the size of their dependencies directly, but they obviously still take up disk space. In some cases they can be shared but actually the amount of disk space this saves is not very big except for things like libc (which Rust dynamically links) and maybe big libraries like Qt, GTK, X11.
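A minimal sketch of the multi-call idea (the applet names here are made up):

    use std::env;
    use std::path::Path;

    fn main() {
        // Dispatch on the name the binary was invoked as (argv[0]),
        // so one executable can stand in for many small tools.
        let argv0 = env::args().next().unwrap_or_default();
        let applet = Path::new(&argv0)
            .file_name()
            .and_then(|s| s.to_str())
            .unwrap_or("")
            .to_owned();
        match applet.as_str() {
            "true" => std::process::exit(0),
            "false" => std::process::exit(1),
            other => eprintln!("unknown applet: {other}"),
        }
    }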

torginus•23h ago
While I'm not ideologically opposed to GC in Rust I have to note:

- the syntax is hella ugly

- GC needs some compiler machinery, like precise GC root tracking with stack maps, space for tracking visited objects, type info, read/write barriers, etc. I don't know how you would retrofit this into Rust without doing heavy-duty brain surgery on the compiler. You can do conservative GC without that, but that's kinda lame.

taylorallred•23h ago
For those who are interested, I think that arena allocation is an underrated approach to managing lifetimes of interconnected objects that works well with borrow checking.
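A small example of that pattern, using the bumpalo crate as one concrete arena (other arena crates look similar):

    use bumpalo::Bump;

    struct Node<'a> {
        value: u32,
        next: Option<&'a Node<'a>>,
    }

    fn main() {
        let arena = Bump::new();
        let a: &Node = arena.alloc(Node { value: 1, next: None });
        let b: &Node = arena.alloc(Node { value: 2, next: Some(a) });
        println!("{} -> {}", b.value, b.next.unwrap().value);
        // Every &Node borrows from `arena`, so the borrow checker guarantees no
        // node outlives it; the whole arena is freed in one go when it drops.
    }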
worik•19h ago
> works well with borrow checking.

Yes, because it defeats borrow checking.

Unsafe Rust, used directly, works too

celeritascelery•18h ago
It does not defeat borrow checking. The borrow checker will ensure that objects do not outlive the arena. It works with borrow checking.
Archit3ch•17h ago
This. Arenas don't work when you don't know when it's okay to free. The borrow checker can help with that (or you can track it manually in C/Zig).
worik•11h ago
The borrow checker knows nothing about your arena allocations.

That is if we are talking about the same thing!

All the borrow checker knows is there is a chunk of memory (the arena) in scope.

It works, but there's no memory safety in the sense that you must manage your own garbage, and you can reference uninitialized parts of the arena.

I have found myself using arenas in Rust for managing circular references (networks with cycles) and if I were to do it again I think I would write that bit in C or unsafe Rust.

ben-schaaf•1h ago
The popular Bumpalo only returns references with the lifetime of the allocator. Not sure what you mean by "manage your own garbage"; an arena allocator deallocates everything when it goes out of scope. You definitely can't reference uninitialized parts of an arena.
haberman•18h ago
I agree, but in my experience arena allocation in Rust leaves something to be desired. I wrote something about this here: https://blog.reverberate.org/2021/12/19/arenas-and-rust.html

I was previously excited about this project which proposed to support arena allocation in the language in a more fundamental way: https://www.sophiajt.com/search-for-easier-safe-systems-prog...

That effort was focused primarily on learnability and teachability, but it seems like more fundamental arena support could help even for experienced devs if it made patterns like linked lists fundamentally easier to work with.

Dwedit•23h ago
There was one time where I actually had to use object resurrection in a finalizer. It was because the finalizer needed to acquire a lock before running destruction code. If it couldn't acquire the lock, you resurrect the object to give it a second chance to be destroyed (by calling GC.ReRegisterForFinalize).
nu11ptr•22h ago
While it might be useful for exploration/academic pursuit/etc., am I the only one who finds "conservative GC" a non-starter? Even if this were fully production-ready and I had a use case for it, I still would never ship an app with a conservative GC. It is difficult enough to remove my own bugs and non-determinism, and I just can't imagine trying to debug a memory leak caused by a conservative GC treating random bits as live pointers.
ltratt•22h ago
If you've used Chrome or Safari to read this post, you've used a program that uses (at least in parts) conservative GC. [I don't know if Firefox uses conservative GC; it wouldn't surprise me if it does.] This partly reflects shortcomings in our current compilers and in current programming language design: even Rust has some decisions (e.g. pointers can be put in `usize`s) that make it hard to do what would seem at first glance to be the right thing.
astrange•19h ago
Also most mobile games written in C# use a conservative GC (Boehm).
Rohansi•16h ago
Not just mobile games - all games made with Unity.
gwbas1c•21h ago
> Having acknowledged that pointers can be 'disguised' as integers, it is then inevitable that Alloy must be a conservative GC

C# / dotnet don't have this issue. The few times I've needed a raw pointer to an object, first I had to pin it, and then I had to make sure that I kept a live reference to the object while native code had its pointer. This is "easier done than said" because most of the time it's passing strings to native APIs, where the memory isn't retained outside of the function call, and there is always a live reference to the string on the stack.

That being said, because GC (in this implementation) is opt-in, I probably wouldn't mix GC and pointers. It's probably easier to drop the requirement to get a pointer to a GC<T> instead of trying to work around such a narrow use case.

quotemstr•21h ago
Worse, conservatism in a GC further implies it can't be a moving GC, which means you can't compact, use bump pointer allocation, and so on. It keeps you permanently behind the frontier.

I remain bitterly disappointed that so much of the industry is so ignorant of the advances of the past 20 years. It's like it's 1950 and people are still debating whether their cloth and wood airplanes should be biplanes or triplanes.

gwbas1c•19h ago
The thing I don't understand is why anyone would pass a pointer to a GC'ed object into a 3rd party library (that's in a different language) and expect the GC to track the pointer there?

Passing memory into code that uses a different memory manager is always a case where automatic memory management shouldn't be used. IE, when I'm using a 3rd party library in a different language, I don't expect it to know enough about my language's memory model to be able to effectively clean up pointers that I pass to it.

quotemstr•9h ago
> The thing I don't understand is why anyone would pass a pointer to a GC'ed object into a 3rd party library

The promise of GC is to free the programmer from the burden of memory management. If I can't give (perhaps fractional) ownership of a data structure to a library and expect its memory to be reclaimed at the appropriate time, have I freed myself from the burden of memory management?

gwbas1c•4h ago
Think about it this way:

Unless you are using malloc, and/or you don't need to do anything when the pointer is freed (the pointer doesn't reference anything else that needs to be freed or released), there's no way that a library written outside of your runtime knows how to free your memory.

Or to put it in a different way: Passing pointers to a native library is a small amount of what your application does and you still benefit from the garbage collector when you are running inside of your own language.

zozbot234•7h ago
You can pass a pointer to a foreign library, but this requires temporarily making the pointee object a GC root because that library code is essentially sharing ownership of it with the GC.
GolDDranks•20h ago
Also, Rust is not going to allow it in the long run that pointers can, in fact, be disguised as integers. There is this thing called pointer provenance, and some day all pointers will be required to have provenance (i.e. a proof of where they came from) OR to admit that POOF, this is a pointer out of thin air and you can't assume anything about the pointee. As long as there are no POOF magicians, the GC can assume that it knows every reference!
celeritascelery•17h ago
> As long as there are no POOF magicians, the GC can assume that it knows every reference!

Creating pointers without provenance is safe, so a GC can't assume that a program won't have them and still be sound. This will always be an issue.

IshKebab•6h ago
I wouldn't say always: https://doc.rust-lang.org/std/ptr/index.html#strict-provenan...

I don't know what the plan is but I wouldn't be surprised if there's a breaking change (maybe in an edition) to remove exposed provenance from Rust entirely.

MereInterest•16h ago
Even their so-called conservative assumption is insufficient.

> if a machine word's integer value, when considered as a pointer, falls within a GCed block of memory, then that block itself is considered reachable (and is transitively scanned). Since a conservative GC cannot know if a word is really a pointer, or is a random sequence of bits that happens to be the same as a valid pointer, this over-approximates the live set

Suppose I allocate two blocks of memory, convert their pointers to integers, then store the values `x` and `x^y`. At this point, no machine word points to the second allocation, and so the GC would consider the second allocation to be unreachable. However, the value `y` could be computed as `x ^ (x^y)`, converted back to a pointer, and accessed. Therefore, their reachability analysis would under-approximate the live set.

If pointers and integers can be freely converted to each other, then the GC would need to consider not just the integers that currently exist, but also every integer that could be produced from the integers that currently exist.
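Concretely, a contrived sketch of that under-approximation (the leak-and-recover is deliberate):

    fn main() {
        let x = Box::into_raw(Box::new(1u8)) as usize;
        let y = Box::into_raw(Box::new(2u8)) as usize;
        let masked = x ^ y;
        // Imagine `y` itself is now dead: no word in memory holds the second
        // allocation's address, only `x` and `masked`, yet it's still reachable:
        let recovered = (x ^ masked) as *mut u8;
        unsafe {
            assert_eq!(*recovered, 2);
            drop(Box::from_raw(recovered));     // free the second allocation
            drop(Box::from_raw(x as *mut u8));  // free the first allocation
        }
    }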

kmeisthax•13h ago
What you're describing is not just a problem with GC, but pointers in general. Optimizers would choke on exactly the same scheme.

What compiler writers realized is that pointers are actually not integers, even though we optimize them down to be integers. There's extra information in them we're forgetting to materialize in code, so-called "pointer provenance", that optimizers are implicitly using when they make certain obvious pointer optimizations. This would include the original block of memory or local variable you got the pointer from as well as the size of that data.

For normal pointer operations, including casting them to integers, this has no bearing on the meaning of the program. Pointers can lower to integers. But that doesn't mean constructing a new pointer from an integer alone is a sound operation. That is to say, in your example, recovering the integer portion of y and casting it to a pointer shouldn't be allowed.

There are two ways in which the casting of integers to pointers can be made a sound operation. The first would be to have the programmer provide a suitably valid pointer with the same or greater provenance as the one that provided the address. The other, which C/C++ went with for legacy reasons, is to say that pointers that are cast to integers become 'exposed' in such a way that casting the same integer back to a pointer successfully recovers the provenance.

If you're wondering, Rust supports both methods of sound int-to-pointer casts. The former is uninteresting for your example[0], but the latter would work. The way that 'exposed provenance' would lower to a GC system would be to have the GC keep a list of permanently rooted objects that have had their pointers cast to integers, and thus can never be collected by the system. Obviously, making pointer-to-integer casts leak every allocation they touch is a Very Bad Idea, but so is XORing pointers.

Ironically, if Alloy had done what other Rust GCs do - i.e. have a dedicated Collect trait - you could store x and x^y in a single newtype that transparently recovers y and tells the GC to traverse it. This is the sort of contrived scenario where insisting on API changes to provide a precise collector actually gets what a conservative collector would miss.

[0] If you're wondering what situations in which "cast from pointer and int to another pointer" would be necessary, consider how NaN-boxing or tagged pointers in JavaScript interpreters might be made sound.
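Recent Rust exposes both options as std pointer methods; a quick sketch of the two:

    use core::mem::size_of;

    fn main() {
        let data = [10u32, 20, 30];
        let p = data.as_ptr();

        // Option 1 (strict provenance): derive the new pointer from an existing
        // one, so it inherits that pointer's provenance.
        let second = p.with_addr(p.addr() + size_of::<u32>());
        unsafe { assert_eq!(*second, 20) };

        // Option 2 (exposed provenance): the C/C++-style escape hatch, where the
        // int-to-pointer cast picks up provenance that was exposed earlier.
        let addr = p.expose_provenance();
        let q: *const u32 = core::ptr::with_exposed_provenance(addr);
        unsafe { assert_eq!(*q, 10) };
    }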

IshKebab•6h ago
> If pointers and integers can be freely converted to each other

You can only freely convert integers to pointers with "exposed provenance" in Rust which is currently unstable.

https://doc.rust-lang.org/std/ptr/index.html#exposed-provena...

I find the idea of provenance a bit abstract so it's a lot easier to think about a concrete pointer system that has "real" provenance: CHERI. In CHERI all pointers are capabilities with a "valid" tag bit (it's out-of-band so you can't just set it to 1 arbitrarily). As soon as you start doing raw bit manipulation of the address the tag is cleared and then it can be no longer used as a pointer. So this problem doesn't exist on CHERI.

Also the problem of mistaking integers as pointers when scanning doesn't exist either - you can instead just search for memory where the tag bit is set.

vsgherzi•20h ago
No one seems to have called it out yet, but Swift uses a form of garbage collection and remains relatively fast. I was against this at first, but the more I think about it, the more I think it has real potential to make lots of hard ownership problems easier to solve. I think the next big step, or perhaps an alternative, would be to make changes to the restrictions in unsafe Rust.

I think the pursuit of safety is a good goal and I could see myself opting into garbage collection for certain tasks.

worik•19h ago
Swift uses reference counting

Slows down every access to objects as reference counts must be maintained

Something weird that I never bothered with to enable circular references

marcianx•16h ago
Reference-counted pointers can dereference an object (via a strong pointer) without checking the reference count. The reference count is accessed only on operations like clone, destruction, and such. That being said, access via a weak pointer does require a reference count check.
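The same split is visible with Rust's Rc, for comparison:

    use std::rc::Rc;

    fn main() {
        let s = Rc::new(String::from("hello"));
        let n = s.len();        // plain use: just a pointer deref, count untouched
        let t = Rc::clone(&s);  // taking shared ownership: count 1 -> 2
        drop(t);                // dropping an owner: count 2 -> 1
        println!("{n} chars, {} owner(s)", Rc::strong_count(&s));
    }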
fulafel•11h ago
This sounds different from common refcounting semantics in other languages, is it really so in Swift?

Usually access increases the reference count (to avoid the object getting GC'd while you use it) and weak pointers are the exception where you are prepared for the object reference to suddenly become invalid.

Someone•8h ago
> Usually access increases the reference count

Taking shared ownership increases the reference count, not access.

Someone•8h ago
> Slows down every access to objects as reference counts must be maintained

Definitely not every access. Between an “increase refcount” and an “decrease refcount” you can access an object as many times as you want.

Also:

- static analysis can remove increase/decrease pairs.

- Swift structs are value types, and not reference counted. That means Swift code can have fewer reference-counted objects than similar Java code has garbage-collected objects.

It does perform slower than GC-ed languages or languages such as C and Rust, but it is easier to write [1] than Rust and C and needs less memory than GC-ed languages.

[1] The latest Swift is a lot more complex than the original Swift, but high-level code still can be reasonably easy.

worik•19h ago
I have thought for years Rust needs to bifurcate.

Async/await really desperately needs a garbage collector. (See this talk from RustConf 2025: https://youtu.be/zrv5Cy1R7r4?si=lfTGLdJOGw81bvpu and this blog: https://rfd.shared.oxide.computer/rfd/400)

Rust that uses standard techniques for asynchronous code, or is synchronous, does not. Async/await sucks all the oxygen from asynchronous Rust.

Async/await Rust is a different language, probably more popular, and worth pursuing (for somebody, not me). It already has a runtime and dreadful hacks like Pin (https://doc.rust-lang.org/std/pin/index.html) that are due to the lack of a garbage collector.

What a good idea

sunshowers•18h ago
Hi -- I'm the one who presented the talk -- honored!

I'm curious how you got to "async Rust needs a [tracing] garbage collector" in particular. While it's true that a lot of the issues here are downstream of futures being passive (which in turn is downstream of wanting async to work on embedded), I'm not sure active futures need a tracing GC. Seems to me like Arc or even a borrow-based approach would work, as long as you can guarantee that the future is dropped before the scope exits (which admittedly isn't possible in safe Rust today [0]).

[0]: https://without.boats/blog/the-scoped-task-trilemma/

worik•11h ago
My comment is quite general.

The difficulties with async/await seem to me to be with the fact that code execution starts and stops using "mysterious magic", and it is very hard for the compiler to know what is in, and what is out, of scope.

I am by no means an expert on async/await, but I have programmed asynchronously for decades. I tried using async/await in Rust, Typescript and Dart. In Typescript and Dart I just forget about memory and I pretend I am programming synchronously. Managed memory, runtimes, money in the bank, who is complaining? Not me.

\digression{start} This is where the first problem I had with async/await cropped up. I do not like things that are one thing and pretend to be another - personally or professionally - and async/await is all about (it seems to me) making asynchronous programming look synchronous. Not only do I not get the point - why? Is asynchronous programming hard? - but I find it offensive. That is a personal quibble and not one I expect many others to find convincing; I guess I am complaining.... \digression{end}

In Rust I swiftly found myself jumping through hoops, and having to add lots and lots of "magic incantations" none of which I needed in the other languages. It has been a while, and I have blotted out the details.

Having to keep a piece of memory in scope when the scope itself is not in my control made me dizzy. I have not gone back and used async/await but I have done a lot of asynchronous rust programming since, and I will be doing more.

My push for Rust to bifurcate and become two languages is because async/await has sucked up all the oxygen. Definitely from asynchronous Rust programming, but it has wrecked the culture generally. The first thing I do when I evaluate a new crate is to look for "tokio" in the dependencies - and two out of three times I find it. People are using async/await by default.

That is OK, for another language. But Rust, as it stands, is the wrong choice for most of those things. I am using it for real time audio processing and it is the right choice for that. But (e.g) for the IoT lighting controller [tapo](https://github.com/mihai-dinculescu/tapo) it really is not.

I am resigned to my Cassandra role here. People like your good self (much respect for your fascinating talk, much envy for your exciting job) are going to keep trying to make it work. I think it will fail. It is too hard to manage memory like Rust does with a borrow checker with a runtime that inserts and runs code outside the programmer's control. There is a conflict there, and a lot of water is going under the bridge and money down the drain before people agree with me and do what I say...

Either that or I will be proved wrong

Lastly I have to head off one of the most common, and disturbing, counter (non) arguments: I absolutely do not accept that "so many smart people are using it it must be OK". Many smart people do all sorts of crazy things. I am old enough to have done some really crazy things that I do not like to recall, and anyway, explain Windows - smart people doing stupid things if ever

fithisux•19h ago
Very important paper.
sebastianconcpt•19h ago
I'm curious about the applicability.

If memory management is already resolved with the borrow checker rules, then what case can make you want a GC in a Rust program?

trueismywork•18h ago
Lock-free programming.
dajonker•17h ago
Implementing a doubly linked list without either unsafe or some very confusing code that could arguably win an obfuscation contest.
antonvs•13h ago
> If memory management is already resolved with the borrow checker rules

Even in standard Rust, this only applies to a subset of memory management. That’s why Rust supports reference counting, for example, which is an alternative to borrow checking. But one could make the case that automatic garbage collection was developed specifically to overcome the problems with reference counting. Given that context, GC in Rust makes perfect sense.

FridgeSeal•17h ago
I don’t understand the desire to staple a GC into Rust.

If you want this, you might just…want a different language? Which is fine and good! Putting a GC on Rust feels like putting 4WD tyres on a Ferrari sports car and towing a caravan with it. You could (maybe) but it feels like using the wrong tool for the job.

dajonker•17h ago
If I understand the article correctly, it's for those cases where you want memory safety (i.e. not using "unsafe") but where the borrow checker is really hard to work with, such as a doubly linked list, where nodes can point to each other.

For the rest you'd still use non-GC rust.
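For a sense of why, here is the usual GC-free, unsafe-free workaround for a doubly linked node, built from Rc, RefCell and Weak (just a sketch, and already fairly noisy):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        value: u32,
        next: Option<Rc<RefCell<Node>>>,   // owning forward link
        prev: Option<Weak<RefCell<Node>>>, // non-owning back link, to avoid an Rc cycle
    }

    fn main() {
        let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
        let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));
        first.borrow_mut().next = Some(Rc::clone(&second));
        second.borrow_mut().prev = Some(Rc::downgrade(&first));

        let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
        println!("{} <-> {}", back.borrow().value, second.borrow().value);
    }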

FridgeSeal•17h ago
I just foresee it becoming irrevocably viral, as it becomes the "meh, easier" option, and then suddenly half your crates depend on it, and then you're losing one of the major advantages of the language.
zozbot234•16h ago
A doubly linked list is not the optimal case for GC. It can be implemented with some unsafe code, and there are approaches that implement it safely with GhostCell (or similar facilities, e.g. QCell) plus some zero-overhead (mostly) "compile time reference counting" to cope with the invariants involved in having multiple references simultaneously "own" the data. See e.g. https://github.com/matthieu-m/ghost-collections for details.

Where GC becomes necessary is the case where even static analysis cannot really mitigate the issue of having multiple, possibly cyclical references to the same data. This is actually quite common in some problem domains, but it's not quite as simple as linked lists.

lmm•13h ago
Adding a GC to Rust might honestly be easier than getting the OCaml ecosystem to adopt something that works as well as cargo. It's tragic, but that's the world we live in.
debugnik•2h ago
Of all the things I'd change about OCaml, dune is very much down the list. It's flat out better than cargo in that it's an actual build system with build rules driven by file dependencies, not simply a glorified frontend for a compiler. Not great, but better.

Now, upstreaming OxCaml's unboxed types and stack allocations? That might actually take longer than adding a GC to Rust.

rurban•13h ago
Memory safety would be a good idea indeed. Just to get rid of the unsafeties in the stdlib and elsewhere. With this would also go type safety, because there would be no more unsafe hacks. Concurrency safety would be another hill to die on, as they chose not to approach this goal, with their blocking IO and locks all over.