> The first performance issue we hit was dynamic dispatch to assembly, as these calls are very hot. We then began adding inner mutability when necessary but had to carefully avoid contention. We found as we removed pointers and transitioned to safe Rust types that bounds checks increasingly became a larger factor. Buffer and structure initialization was also an issue as we migrated to safe, owned Rust types.
Based on their conclusions², each of those issues amounts to a few percentage points (total: 11%).
Based on the article, it seems that with highly complex logic, safety does come at the cost of raw performance, and it can be very hard to compensate (within the safety requirements).
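To illustrate the buffer-initialization point from the quote (a hypothetical sketch, not rav1d code): with safe, owned Rust types, a scratch buffer is typically zeroed before it gets overwritten, which is pure extra work in a hot path.

```rust
// Sketch: zero-initializing a scratch buffer that will be overwritten anyway,
// versus writing into reserved capacity directly. Assumes src.len() <= n.
fn zeroed_then_filled(src: &[u8], n: usize) -> Vec<u8> {
    let mut buf = vec![0u8; n];            // memset: every byte touched once...
    buf[..src.len()].copy_from_slice(src); // ...then the useful bytes touched again
    buf
}

fn filled_directly(src: &[u8], n: usize) -> Vec<u8> {
    let mut buf = Vec::with_capacity(n); // allocation only, no memset
    buf.extend_from_slice(src);          // useful bytes written exactly once
    buf.resize(n, 0);                    // only the unused tail is zeroed
    buf
}
```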
[¹]: https://www.memorysafety.org/blog/rav1d-performance-optimiza...
[²]: https://www.memorysafety.org/blog/rav1d-performance-optimiza...
In Rust. These are Rust issues, not issues with safety in general.
The issue with bounds checks, for example, is entirely avoidable if you prove that all your calls are within bounds before compiling; same thing for partial initialization.
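A minimal sketch of that idea (illustrative, not from rav1d): a single up-front check can give the optimizer enough information to drop the per-iteration bounds checks.

```rust
// Hypothetical example: one hoisted check instead of sixteen implicit ones.
fn sum_first_16(block: &[u8]) -> u32 {
    assert!(block.len() >= 16); // the single, up-front proof
    let mut acc = 0u32;
    for i in 0..16 {
        // With the assert above, the compiler can usually see that i < block.len()
        // and elide the bounds check on this access.
        acc += u32::from(block[i]);
    }
    acc
}
```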
The core issue is that the strategies Rust adopts to ensure memory safety are neither a panacea nor necessarily the right solution in every case. That being said, I think it's a very nice idea to try to write a decoder in Rust and have a bounty for optimization. Rust is popular so work on producing fast and safe Rust is good.
The situation is more nuanced. The article dedicates a section to it:
> The general idea in eliding unnecessary bounds checks was that we needed to expose as much information about indices and slice bounds to the compiler as possible. We found many cases where we knew, from global context, that indices were guaranteed to be in range, but the compiler could not infer this only from local information (even with inlining). Most of our effort to elide bounds checks went into exposing additional context to buffer accesses.
(extensive information given in that section)
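One common way of exposing that kind of context (a hypothetical sketch with made-up names, not rav1d's actual code) is to convert a slice into a fixed-size array reference once, so later accesses index against a length the compiler knows statically:

```rust
// Hypothetical sketch: a 4x4 block handled through &[u8; 16] instead of &[u8].
fn copy_4x4(dst: &mut [u8], src: &[u8]) {
    // The length checks happen once, here, at the conversion...
    let dst: &mut [u8; 16] = (&mut dst[..16]).try_into().expect("dst too short");
    let src: &[u8; 16] = (&src[..16]).try_into().expect("src too short");
    // ...so the indexing below is against a statically known bound.
    for i in 0..16 {
        dst[i] = src[i];
    }
}
```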
Which is exactly the case for video decoders.
In fact, a bounty will show us who really is serious or not. Even some interviewers don't have a clue why they ask for 'optimal algorithms' in the first place; maybe just to show off that they Googled or used ChatGPT for the answer before the interview.
Well, you can't do that for this one. Whoever improves this and can explain their changes will put on an impressive show of expertise, one that puts a lot of so-called 'SWEs' to shame.
If you can fund someone for at least 6 months of work, it becomes reasonable to work for these.
Edit: Looks like many people are not understanding how overhead works. Your take-home pay over a year is just over $100,000, since you end up spending so much unpaid time looking for the next gig.
Does that mean that Rust devs earn 60k monthly, minimum?
Sure, when you work you make a lot of money, but you end up needing to spend the vast majority of your time looking for the next gig, and that is all unpaid.
What kind of lunatic would pay that kind of money for a dev for a single week?
I'm really surprised that a 5% performance degradation would lead people to choose C over Rust, especially for something like a video codec. I wonder if they really care, or if this is one of those "we don't want to use Rust because of silly reasons, and here's a reasonable-sounding but actually irrelevant technical justification" situations...
So I doubt it's any religious thing between C and Rust.
A farm of isolated machines doing batch transcoding jobs? Give me every single % improvement you can. They can get completely owned and still won't be able to access anything useful or even reach out to the network. A crash/violation will be registered and a new batch will get a clean slate anyway.
It's got bounds checking, lifetimes, shared access checks, enforced synchronisation, serious typed enums (not enforced but still helpful), explicit overflow behaviour control, memory allocation management, etc. etc. to help with safety. Far from "that's all".
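For instance, two of those features in a few lines (a made-up example, not decoder code): exhaustive enums and explicitly chosen overflow behaviour.

```rust
// Made-up example of exhaustive matching and explicit overflow handling.
enum BlockSize {
    B4x4,
    B8x8,
    B16x16,
}

fn pixels(size: BlockSize) -> u32 {
    // Adding a new variant later makes this match a compile error until handled.
    match size {
        BlockSize::B4x4 => 16,
        BlockSize::B8x8 => 64,
        BlockSize::B16x16 => 256,
    }
}

fn add_pixel(a: u8, b: u8) -> u8 {
    // Overflow behaviour is chosen explicitly rather than left implicit;
    // wrapping_add, checked_add, and overflowing_add are the other options.
    a.saturating_add(b)
}
```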
> You can also write totally safe programs in C,
No, you can't in practice, beyond trivial levels. Even superhuman exceptions like DJB made mistakes, and statistically nobody here is even close to DJB.
> use formal verification like this crypto library does
"Beware of bugs in the above code; I have only proved it correct, not tried it." -D.Knuth (That is - you can make mistakes in the spec too - see how many issues the verified sel4 had https://github.com/seL4/seL4/blob/master/CHANGES.md )
See also: Gamers ready to shell out $$$ because of 1% lows.
1. Desktop - If both implementations run the same but one is faster, you run the faster one to stop the decode spluttering on those borderline cases.
2. Embedded - Where resources are limited, you still go for the faster one, even if it might one day lead to a zero-day, because you've weighed up the risk: reducing the BOM is an instant win, while trying to factor in some unknown code element isn't.
3. Server - You accept media from unknown sources, so you are sandboxed anyway. Losing 5% of computing resources adds up to big $ over a year at enough scale. At YouTube, for example, it could be millions of dollars a year of compute spent decoding and then re-encoding.
Some other resistances:
1. Energy - If you have software being used in many places over the world, that cost saving is significant in terms of energy usage.
2. Already used - If the C implementation is working without issue, there would be high resistance to spend engineering time to put a slower implementation in.
3. Already C/C++ - If you already have a codebase using the same language, why would you now introduce Rust into your codebase?
4. Bindings - Commonly used libraries use the C version and are slow to change. The default may remain the C version in the likes of ffmpeg.
I'm really surprised that because something is in Rust and not in C, it would lead people to ignore a 5% performance degradation.
Seriously... when you get something that's 5% faster especially in the video codec space, why would you dismiss it just because it's not in your favorite language... That does sound like a silly reason to dismiss a faster implementation.
Kind of a strawman argument, though. The question is: is the 5% difference (today) worth the memory safety guarantees? I.e., would you be OK with your browser using 5% more power displaying video if it meant you couldn't be hacked via a memory safety bug?
Because it also means your battery drains 5% faster, it gets hotter, you will need to upgrade your media player device etc etc etc.
Seen on the scale of the actual deployment, this is HUGE.
Also, glancing over the implementation of rav1d, it seems to have some C dependencies, but also unsafe code in some places. This, to me, makes banging the drum of memory safety (as is often done whenever a Rust option is discussed, for obvious reasons, since it's one of the main selling points of the language) a bit moot here.
Why especially video decoders?
> I wonder if they really care or if this is one of those "we don't want to use Rust because of silly reasons and here's are reasonable-sounding but actually irrelevant technical justification"...
I would have thought video decoders are specifically one of the few cases where performance really is important enough to trump language guaranteed security. They're widely deployed, and need to work in a variety of environments; everything from low power mobile devices to high-throughput cloud infrastructure. They also need to be low latency for live broadcast/streaming.
That's not to say security isn't a concern. It absolutely is, especially given the wide variety of deployment targets. However, video decoders aren't something that necessarily needs to evolve continually over time. If you prioritize secure coding practices and pair that with some formal/static analysis, then you ought to be able to squeeze out more performance than Rust. For example, Rust may be inserting bounds checks on repeated access, whereas a C program could potentially validate this sort of information just once up front and pass the "pre-validated" data structure around (maybe even across threads), "knowing" that it's valid data. Yes, there's a security risk involved, but it may be worth it.
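For what it's worth, the same trade can be expressed in Rust with an unsafe escape hatch. A rough sketch (hypothetical type, not from rav1d) of validating once and trusting the proof afterwards:

```rust
/// Hypothetical wrapper: bounds proven once at construction, trusted afterwards.
struct Prevalidated<'a> {
    data: &'a [u8],
    limit: usize, // exclusive upper bound checked in `new`
}

impl<'a> Prevalidated<'a> {
    fn new(data: &'a [u8], limit: usize) -> Option<Self> {
        (limit <= data.len()).then(|| Prevalidated { data, limit })
    }

    #[inline]
    fn get(&self, i: usize) -> u8 {
        debug_assert!(i < self.limit);
        // SAFETY: `limit <= data.len()` was checked in `new`; callers are
        // trusted to keep `i < limit`, which is exactly the C-style trade-off
        // described above (less checking in the hot path, more trust).
        unsafe { *self.data.get_unchecked(i) }
    }
}
```

Whether that trade is wise is exactly what's being debated in this thread.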
It has to be, you know, correct?
> The contest is open to individuals or teams of individuals who are legal residents or citizens of the United States, United Kingdom, European Union, Canada, New Zealand, or Australia.
So most countries where putting in the effort would actually be worth the bounty offered are a no-go...
I understand.
Turks are Asian. Russians are Asian. Indians are Asian. Etc.
They were probably just wondering why it's limited to Five Eyes + EU.
> ...not located in the following jurisdictions: Cuba, Iran, North Korea, Russia, Syria, and the following areas of Ukraine: Donetsk, Luhansk, and Crimea.
Showing that the only explicit exclusions are aimed at the usual gang of comprehensively sanctioned states.
[1] https://www.memorysafety.org/rav1d-bounty-official-rules/
Still doesn't explain why the rest of the world isn't in the inclusions list. Maybe they don't want to deal with a language barrier by sticking to the Anglosphere... plus EU?