Counting CPU cycles as if it's an accomplishment seems irrelevant in a world where 50% of modern CPU resources are allocated toward UI eye candy.
If modern CPUs are so power efficient and have so many spare cycles to allocate to e.g. eye candy no one asked for, then no one is counting and the comparison is irrelevant.
“Best” is measured along a lot more axes than just performance. And you don’t always get to choose what format you use; it may be dictated to you by some 3rd party you can’t influence.
- better compression ratio than gzip
- faster compression than many better-than-gzip competitors
- lower CPU/RAM usage for the same compression ratio/time
This is a niche, but it does crop up sometimes. The downside to bzip2 is that it is slow to decompress, but for write-heavy workloads, that doesn't matter too much.
[0]: https://meta.wikimedia.org/wiki/Data_dump_torrents#English_W...
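For a rough feel of that first bullet, Python's stdlib ships bindings for both algorithms, so the ratio difference is easy to sample (illustrative only; exact numbers depend heavily on the input):

```python
import bz2
import zlib

# Redundant text is where bzip2's BWT-based scheme tends to shine
# versus gzip/zlib's LZ77. The sizes printed are input-dependent.
data = b"the quick brown fox jumps over the lazy dog\n" * 2000

gz = zlib.compress(data, level=9)
bz = bz2.compress(data, compresslevel=9)

assert bz2.decompress(bz) == data  # lossless round-trip
print(f"raw={len(data)} zlib={len(gz)} bzip2={len(bz)}")
```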
The same about exported symbols and being able to compile to wasm easily.
What Rust-the-language does offer is temporal safety (i.e. the borrow checker), and there's no easy way to get that in C.
And that's assuming they aren't lying about the counting: https://desuarchive.org/g/thread/104831348/#q104831479
Indeed. You know the react-angular-vue churn? It appears that the trend of people pushing stuff because it benefits their careers is coming to the low-level world.
I for one still find it mystifying that Linus Torvalds let these people into the kernel. Linus, who famously banned C++ from the kernel not because of C++ in itself, but to ban C++ programmer culture.
That's the kind of attitude that leads to 50% of modern CPU resources being allocated toward UI eye candy.
An attitude which leads to Electron apps replacing native ones, and I hate it. I am not buying better CPUs and more RAM just to have them wasted like this.
This is in fact Jevons paradox: when technological progress increases the efficiency with which a resource is used, the rate of consumption of that resource rises due to increasing demand; essentially, efficiency improvements can lead to increased consumption rather than the intended conservation. [^2][^3]
[^1]: https://www.comp.nus.edu.sg/~damithch/quotes/quote27.htm
[^2]: https://www.greenchoices.org/news/blog-posts/the-jevons-para...
Hardware efficiency just gives more room for software to bloat. The pain level is a human factor and stays the same.
So time to adapt Wirth's law: software gets slower >exactly as much< as hardware gets faster.
I can read and write C code from the times when there weren't any competitive alternatives. I have no problem reading or writing Rust code. In fact it communicates more to me than C code does or can and I can immediately understand more about the code written in Rust than I can about code written in C.
Yes, and they hated it and "worked hard to kill it" per Jordan Petridis. Note that the maintainers of X in the Wayland era are not really the same people as the original authors of X.
And there is likely a reason the original people didn't continue to work on it.
In OSS every hour of volunteer time is precious Manna from heaven, flavored with unicorn tears. So any way to remove Toil and introduce automation is gold.
Rust's strict compiler and an appropriate test suite guarantee a level of correctness far beyond C. There's less onus on the reviewer to ensure everything still works as expected when reviewing a pull request.
It's a win-win situation.
Ironically there is one CVE reported in the bzip2 crate
[1] https://app.opencve.io/cve/?product=bzip2&vendor=bzip2_proje...
They're releasing 0.6.0 today :>
https://trifectatech.org/blog/translating-bzip2-with-c2rust/
I suspect attempting to debug it would be a nightmare though. Given the LLM could hallucinate anything anywhere you’d likely waste a ton of time.
I suspect it would be faster to just try and write a new implementation based on the spec and debug that against the test suite. You’d likely be closer.
In fact, since they used c2rust, they had a perfectly working version from the start. From there they just had to clean up the Rust code and make sure it didn’t break anything. Clearly the best of the three options.
No. You need to audit for correctness in addition to safety.
https://github.com/immunant/c2rust reportedly works pretty well. Blog post from a few years ago of them transpiling quake3 to rust: https://immunant.com/blog/2020/01/quake3/. The rust produced ain't pretty, but you can then start cleaning it up and making it more "rusty"
(ie, is it some kind of rust auto-simd thing, did they use the opportunity to hand optimize other parts or is it making use of newer optimized libraries, or... other)
Linked from the article is another on how they used c2rust to do the initial translation.
https://trifectatech.org/blog/translating-bzip2-with-c2rust/
For our purposes, it points out places where the code isn’t very optimal because the C code has no guarantees on the ranges of variables, etc.
It also points out a lot of people just use ‘int’ even when the number will never be very big.
But with the proper type the Rust compiler can decide to do something else if it will perform better.
So I suspect your idea that it allows unlocking better optimizations through more knowledge is probably the right answer.
reminds me a bit of the days when perl programs would often outrun native c/c++ for common tasks because ultimately they had the most efficient string processing libraries baked into the language built-ins.
how is space efficiency? last i checked, because of big libraries and ease of adding them to projects, a lot of rust binaries tend to be much larger than their traditional counterparts. how might this impact overall system performance if this trade-off is made en-masse? (even if more ram is added to counteract loss of vm page cache, does it also start to impact locality and cache utilization?)
i'd be curious how something like redox benchmarks against traditional linux for real world workloads and interactivity measures.
[0] https://bcantrill.dtrace.org/2018/09/28/the-relative-perform...
if all the binaries are big, does it start to crowd out cache space? does static linking make sense for full systems?
So more than the absolute size of the binary, you should worry about how much is actually in the 'active set'.
https://nlnet.nl/project/current.html https://www.sovereign.tech/programs/fund
There's been good support over the last couple of years to fund rewriting critical internet & OS tools into safer languages like Rust.
Eg BGP in Rust https://www.nlnetlabs.nl/projects/routing/rotonda/
It's 2025, and most programs like Python are stuck at one CPU core.
In general: People usually aren't too concerned about it.
> Why bother working on this algorithm from the 90s that sees very little use today?
What's in use nowadays? zstd?
ahh saw this: https://quixdb.github.io/squash-benchmark/
Edit : it probably doesn't.
dralley•7mo ago
Fedora recently swapped the original Adler zlib implementation with zlib-ng, so that sort of thing isn't impossible. You just need to provide a C ABI compatible with the original one.
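The "C ABI compatible" part is the whole trick: callers load whatever library provides the expected symbols, whether that's Adler's zlib, zlib-ng, or a Rust cdylib. A sketch of that caller's view via ctypes (assumes a zlib-compatible library is installed; skips gracefully if not found):

```python
import ctypes
import ctypes.util
import zlib

# The caller only sees exported symbols; any zlib-ABI-compatible
# library (original zlib, zlib-ng compat build, a Rust cdylib) works.
path = ctypes.util.find_library("z")
if path:
    libz = ctypes.CDLL(path)
    libz.crc32.restype = ctypes.c_ulong
    buf = b"hello"
    crc = libz.crc32(0, buf, len(buf))
    # Must agree with Python's own zlib binding, whichever
    # implementation happens to be behind it.
    assert crc == zlib.crc32(buf)
    print(hex(crc))
```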
rlpb•7mo ago
How does this interact with dynamic linking? Doesn't the current Rust toolchain mandate static linking?
arcticbull•7mo ago
Use crate-type=["cdylib"]
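For anyone unfamiliar, that setting lives in the library's Cargo.toml (a minimal sketch):

```toml
[lib]
# Emit a C-ABI shared library (.so/.dylib/.dll) rather than a Rust rlib,
# so C callers can link or dlopen it like any other native library.
crate-type = ["cdylib"]
```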
emidln•7mo ago
The sum size of a dynamic binary plus the dynamic libraries may be larger than one static linked binary, but whether that holds for more static binaries (2, 3, or 100s) depends on the surface area your application uses of those libraries. It's relatively common to see certain large libraries only dynamically linked, with the build going to great lengths to build certain libraries as shared objects with the executables linking them using a location-relative RPATH (using the $ORIGIN feature) to avoid the extra binary size bloat over large sets of binaries.
IshKebab•7mo ago
They are often conflated because you can't have shared dependencies with static linking, and bundling dynamically linked libraries is uncommon in FOSS Linux software. It's very common on Windows or with commercial software on Linux though.
tialaramex•7mo ago
[In case anybody is confused by your utterance, yes of course this works in Rust]
guappa•7mo ago
I eagerly await the results!
tux3•7mo ago
Here's nu, a shell in Rust:
And here's the Debian variant of ash, a shell in C:
guappa•7mo ago
The problem of increased RAM requirements and constant rebuilds are still very real, if only slightly less big because of dynamically linking C.
bronson•7mo ago
Your second paragraph is either a meaningless observation on the difference between static and dynamic linking or also incorrect. Not sure what your intent was.
IshKebab•7mo ago
Also Go does produce fully static binaries on Linux and so it's at least reasonable to incorrectly guess that Rust does the same.
Definitely shouldn't be so confident though!
IshKebab•7mo ago
Mac does this, and Windows pretty much does it too. There was an attempt to do this on Linux with the Linux Standard Base, but it never really worked and they gave up years ago. So on Linux if you want a truly portable application you can pretty much only rely on the system providing very old versions of glibc.
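To see what floor you're actually targeting, you can ask the running C library for its version (Linux/glibc-only sketch; on musl or other libcs the symbol simply isn't there, hence the guards):

```python
import ctypes
import platform

# Linux/glibc-only: gnu_get_libc_version() reports the version that a
# "portable" prebuilt binary effectively has to treat as its minimum.
if platform.system() == "Linux":
    libc = ctypes.CDLL(None)
    if hasattr(libc, "gnu_get_libc_version"):
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        print(libc.gnu_get_libc_version().decode())  # e.g. "2.39"
```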
guappa•7mo ago
It's hardly a fair comparison with old linux distros when osx certainly will not run anything old… remember they dropped rosetta, rosetta2, 32bit support, opengl… (list continues).
And I don't think you can expect windows xp to run binaries for windows 11 either.
So I don't understand why you think this is perfectly reasonable to expect on linux, when no other OS has ever supported it.
Care to explain?
TuxSH•7mo ago
This way you can get small binaries with readable assembly.
eru•7mo ago
Only relinking, which you can make cheap for your non-release builds.
Dynamic linking needs relinking every time you run the program!
pjmlp•7mo ago
Plenty do not, especially on Apple and Microsoft platforms because they always favoured other approaches to bare bones UNIX support on their dynamic linkers, and C++ compilers.
pjc50•7mo ago
(which actually slightly pre-dates C++, I think?)
Someone•7mo ago
No. C++ is from 1985 (https://en.wikipedia.org/wiki/C%2B%2B), COM from 1993 (https://en.wikipedia.org/wiki/Component_Object_Model)
pjmlp•7mo ago
This is why Apple made such a big point of having a better ABI approach on Swift, after their experience with C++ and Objective-C.
While on the Microsoft side, you will notice that all talks from Victor Ciura at Rust conferences treat ABI as one of the key points Microsoft is dealing with in the context of Rust adoption.
Someone•7mo ago
Easy, not free. In many languages, extra work is needed to provide a C interface. Strings may have to be converted to zero terminated byte arrays, memory that can be garbage collected may have to be locked, structs may mean having to be converted to C struct layout, etc.
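A small illustration of that conversion cost from Python's side: a native str can't cross the C boundary as-is, it first has to become a NUL-terminated byte buffer:

```python
import ctypes

# A Python str must become a NUL-terminated byte buffer before any
# C function can consume it; this copy/conversion is the "not free" part.
s = "hello"
buf = ctypes.create_string_buffer(s.encode("utf-8"))
assert len(buf) == 6           # five bytes plus the trailing NUL
assert buf.raw == b"hello\x00"
print(buf.value)               # b'hello'
```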
egorfine•7mo ago
[1] https://discourse.ubuntu.com/t/adopting-sudo-rs-by-default-i...
tiffanyh•7mo ago
https://uutils.github.io/
cocoa19•7mo ago
The performance boost in tools like ripgrep and tokei is insane compared to the tools they replace (grep and cloc respectively).
egorfine•7mo ago
ripgrep is an excellent tool. But it's not a grep replacement. And should not ever be.
egorfine•7mo ago
Please forgive me my ignorance, but what's wrong with bash? I'm still using it on all servers and workstations, I constantly write scripts for it, some fairly complex. It's not an obsolete project and it looks like a mainstream shell to me. Am I wrong?
Update: yeah, I realize now that this was about the original Bourne Shell, not bash.
egorfine•7mo ago
You have a point here. I have to agree.
"X but rewritten in Z" is terrible marketing, though. Makes me instantly want to hate the tool and its authors. (Love Rust. Hate the vibe.)
mprovost•7mo ago
egorfine•7mo ago
Compare "I have typed a setuid() wrapper in rust" vs "I'm the author of sudo-rs".
scripturial•7mo ago
How would you even tell people you made a better rust crate without using the word “rust?”
egorfine•7mo ago
Rust folks are claiming excellent interoperability with C binaries. Why the need for a rewrite then?
swiftcoder•7mo ago
Interoperability runs both ways, everyone currently taking a dependency on the C library can swap in the rust library in its place and see the same benefits
hulitu•7mo ago
or Korn shell.
burntsushi•7mo ago
https://github.com/BurntSushi/ripgrep/blob/master/FAQ.md#can...
dagw•7mo ago
Those "core standards" that you talk about didn't spring fully formed from the earth. They came about from competition and beating out and replacing the old "core standards" that lots of people argued very strongly for should not ever be replaced. When I was starting out my career I was told by experienced people that I should not learn to rely on the GNU tool features, since they're far from ubiquitous and probably won't be installed on most systems I'll be working on.
jelder•7mo ago
Tokei _finishes_ before cloc can print its help text. I wrote this post in less time than it took `cloc .` to count all the files in my project, probably because it doesn't know to ignore `target/`.
arp242•7mo ago
cloc is a Perl script, so it has the interpreter startup time.
coldpie•7mo ago
2) I think the MIT license was a mistake. These are often cloning GNU utilities, so referencing GNU source in its original language and then re-implementing it in Rust would be the obvious thing to do. But porting GPL-licensed code to an MIT licensed project is not allowed. Instead, the utilities must be re-implemented from scratch, which seems like a waste of effort. I would be interested in doing the work of porting GNU source to Rust, but I'm not interested in re-writing them all from scratch, so I haven't contributed to this project.
Scuds•7mo ago
1 - The GNU utilities are ancient, crufty, #IFDEF'd C that's been in maintenance mode for decades. You want code to handle quirks of Tru64 and Ultrix? You got it.
2 - Waving your hands around 'the community will take care of it' is magical thinking. C developers don't grow on trees. C tooling is kinda weird and doesn't resemble anything modern - good luck finding enough VOLUNTEER C developers to make your goals happen.
GuB-42•7mo ago
I don't know of a sed equivalent, but I guess that would be easy to implement as Rust has good regex support (see ripgrep), and 90%+ of sed usage is search-and-replace. The other commands don't look hard to implement and because they are not used as much, optimizing these is less of a priority.
I don't know about awk, it is a full programming language, but I guess it is far from an impossible task to implement.
Now the real hard part is making a true, bug-for-bug compatible replacement of the GNU version of these tools, but while good to have, it is not strictly necessary. For example, Busybox is very popular, maybe even more so than GNU in terms of number of devices, and it has its own (likely simplified) version of grep, sed and awk.
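The search-and-replace core really is small; a sketch of sed's `s/pattern/replacement/g` in terms of a regex engine (the function name here is made up for illustration, not any real tool's API):

```python
import re

# Minimal model of `sed 's/pattern/replacement/g'`: one regex
# substitution applied line by line. Illustrative only; real sed
# also supports addresses, hold space, and other commands.
def sed_s(pattern: str, replacement: str, text: str) -> str:
    return "\n".join(re.sub(pattern, replacement, line)
                     for line in text.split("\n"))

print(sed_s(r"gr[ae]y", "grey", "gray area\nthe grey fox"))
# grey area
# the grey fox
```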
scns•7mo ago
https://github.com/chmln/sd
kpcyrd•7mo ago
https://github.com/trifectatechfoundation/libbzip2-rs/blob/8...
I'm not familiar enough with the symbols of bzip2 to say anything about ABI compatibility.
I have a toy project to explore things like that, but it's difficult to set aside the amount of time needed to maintain an implementation of the GNU operating system. I would welcome pull requests though:
https://github.com/kpcyrd/platypos
LinusU•7mo ago
> [...] doesn't that mean the implementation is just... finished?
I don't think that it _necessarily_ means that; e.g., are all projects that haven't had a release since 2019 finished? Probably most of them are simply abandoned.
On the other hand, a finished implementation is certainly a _possible_ explanation for why there have been no releases.
In this specific case, there are a handful of open bugs on their issue tracker. So that would indicate that the project isn't finished.
ref: https://sourceware.org/bugzilla/buglist.cgi?product=bzip2
restalis•7mo ago
Although much code can be optimized to get it to run 10-15% faster, if that comes at the expense of legibility then such "features" get rejected nowadays. Translating an existing codebase into a language that makes things more difficult¹ and (because of that) has (and most likely will have) fewer engineers willing to work in it looks very much akin to applying legibility-affecting optimizations to me.
¹ ...and not only. I've already described in other comments some of the issues I see with it as a software development tool, here https://news.ycombinator.com/item?id=31565024 here https://news.ycombinator.com/item?id=33390634 and here https://news.ycombinator.com/item?id=42241516
LinusU•7mo ago
Makes sense, and I'd probably make the same call if I was a maintainer and someone submitted a patch which increased performance at the cost of maintainability...
> Translating an existing codebase into a language that makes things more difficult¹ and (because of that) has (and most likely will have) fewer engineers willing to work in it looks very much akin to applying legibility-affecting optimizations to me.
Here I have to personally disagree. I think that Rust is easier than both C and C++. Especially when coming into an already existing project.
The chance of me contributing to a Rust project is higher than to a C project, because I feel more comfortable in knowing that my code is "correct". It's also easier to match the "style" since most Rust projects follow the same structures and patterns, whereas in C it can vary wildly.
E.g. I contributed my first feature to Valkey (Redis fork, C codebase) recently, and figuring out how the tests worked took me quite some time. In fact, in the end I couldn't figure out how to run a specific test so I just ran all tests and grepped the output for my test name. But the tests take tens of minutes to run so this was sub-optimal. On the other hand, 99% of all Rust projects use `cargo test`, and to run a single test I can just click the play button in my editor (Zed) that shows up next to the test (or `cargo test "testname"`).
(with this said, I think that Valkey is a really well structured code base! Great work on their part)
Anyhow, this is just to illustrate my experience. I'm sure that for someone more used to C or C++ they would be more productive in that. And I could go on for ages on all the features that make me miss Rust every day at work when I have to work in other languages, especially algebraic data types!
moomin•7mo ago