frontpage.

FFmpeg devs boast of another 100x leap thanks to handwritten assembly code

https://www.tomshardware.com/software/the-biggest-speedup-ive-seen-so-far-ffmpeg-devs-boast-of-another-100x-leap-thanks-to-handwritten-assembly-code
178•harambae•5h ago

Comments

shmerl•5h ago
Still waiting for PipeWire + xdg desktop portal screen/window capture support in the ffmpeg CLI. It's been dragging its feet on that forever.
Aardwolf•5h ago
The article sometimes says 100x, other times it says 100% speed boost. E.g. it says "boosts the app’s ‘rangedetect8_avx512’ performance by 100.73%." but the screenshot shows 100.73x.

100x would be a 9900% speed boost, while a 100% speed boost would mean it's 2x as fast.

Which one is it?

pizlonator•5h ago
The ffmpeg folks are claiming 100x, not 100%. The article probably has a typo.
k_roy•2h ago
That would be quite the percentage difference with 100x
MadnessASAP•5h ago
100x for the single function, 100% (2x) for the whole filter.
torginus•4h ago
I'd guess the function operates on 8-bit values, judging from the name. If the previous implementation was scalar, a double-pumped AVX512 implementation can process 128 elements at a time, making the 100x speedup plausible.
ethan_smith•3h ago
It's definitely 100x (or 100.73x) as shown in the screenshot, which represents a 9973% speedup - the article text incorrectly uses percentage notation in some places.
pavlov•4h ago
Only for x86 / x86-64 architectures (AVX2 and AVX512).

It’s a bit ironic that for over a decade everybody was on x86 so SIMD optimizations could have a very wide reach in theory, but the extension architectures were pretty terrible (or you couldn’t count on the newer ones being available). And now that you finally can use the new and better x86 SIMD, you can’t depend on x86 ubiquity anymore.

Aurornis•4h ago
AVX512 is a set of extensions. You can’t even count on an AVX512 CPU implementing all of the AVX512 instructions you want to use, unless you stick to the foundation instructions.

Modern encoders also have better scaling across threads, though not infinite. I was on an embedded project a few years ago where we spent a lot of time trying to get the SoC's video encoder working reliably, until someone ran ffmpeg and we realized we could just use several of the CPU cores for a better result anyway.

AaronAPU•4h ago
When I spent a decade doing SIMD optimizations for HEVC (among other things), it was sort of a joke to compare the assembly versions to plain C, because you'd get some ridiculous multipliers like 100x. It's pretty misleading; what it really means is that the baseline was extremely inefficient to begin with.

The devil is in the details: microbenchmarks typically call the same function a million times in a loop, everything gets cached, and the overhead is reduced to sheer CPU cycles.

But that’s not how it’s actually used in the wild. It might be called once in a sea of many, many other things.

You can at least go out of your way to create a massive test region of memory to prevent the cache from being so hot, but I doubt they do that.
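
A minimal sketch of that idea (nothing from FFmpeg's actual test harness): size the working set well beyond the last-level cache so each pass pays real memory traffic instead of re-hitting hot lines. The buffer size and the placeholder byte-sum kernel are invented for illustration.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Stand-in for the routine being measured: a trivial byte sum. */
  static uint64_t kernel(const uint8_t *buf, size_t n)
  {
      uint64_t s = 0;
      for (size_t i = 0; i < n; i++)
          s += buf[i];
      return s;
  }

  int main(void)
  {
      size_t n = (size_t)1 << 30;          /* 1 GiB: larger than a typical L3 */
      uint8_t *buf = malloc(n);
      if (!buf)
          return 1;
      for (size_t i = 0; i < n; i++)       /* touch every page up front */
          buf[i] = (uint8_t)i;

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      uint64_t sum = kernel(buf, n);       /* one pass that streams from RAM */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("sum=%llu  %.2f GB/s\n", (unsigned long long)sum, n / sec / 1e9);
      free(buf);
      return 0;
  }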

torginus•4h ago
Sorry for the derail, but it sounds like you have a ton of experience with SIMD.

Have you used ISPC, and what are your thoughts on it?

I feel it's a bit ridiculous that in this day and age you have to write SIMD code by hand, as regular compilers suck at auto-vectorizing, especially as this has never been the case with GPU kernels.

almostgotcaught•2h ago
> Have you used ISPC

No professional kernel writer uses auto-vectorization.

> I feel it's a bit ridiculous that in this day and age you have to write SIMD code by hand

You feel it's ridiculous because you've been sold a myth/lie (abstraction). In reality the details have always mattered.

CyberDildonics•9m ago
ISPC is a lot different from C++ compiler auto-vectorization, and it works extremely well. Have you tried it or not? If so, where does it actually fall down? It warns you when doing slow stuff like gathers and scatters.
capyba•1h ago
Personally I’ve never been able to beat gcc or icx autovectorization by using intrinsics; often I’m slower by a factor of 1.5-2x.

Do you have any wisdom you can share about techniques or references you can point to?

jesse__•12m ago
I recently finished a 4-part series about vectorizing Perlin noise, from the very basics up to beating the state of the art by 1.8x:

https://scallywag.software/vim/blog/simd-perlin-noise-i

izabera•2h ago
ffmpeg is not too different from a microbenchmark, the whole program is basically just: while (read(buf)) write(transform(buf))
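
A runnable toy with exactly that shape, where the transform is just a stand-in byte inversion:

  #include <stdio.h>

  int main(void)
  {
      unsigned char buf[4096];
      size_t n;
      /* while (read(buf)) write(transform(buf)), in miniature */
      while ((n = fread(buf, 1, sizeof buf, stdin)) > 0) {
          for (size_t i = 0; i < n; i++)
              buf[i] = (unsigned char)(255 - buf[i]);   /* stand-in transform */
          fwrite(buf, 1, n, stdout);
      }
      return 0;
  }
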
fuzztester•1h ago
the devil is in the details (of the holy assembly).

thus sayeth the lord.

praise the lord!

yieldcrv•2h ago
> what it really means is it was extremely inefficient to begin with

I care more about the outcome than the underlying semantics; to me that's kind of a given.

jauntywundrkind•3h ago
Kind of reminds me of Sound Open Firmware (SOF), which can be compiled either with unoptimized GCC or with the proprietary Cadence XCC compiler, which can use the Xtensa HiFi SIMD intrinsics.

https://thesofproject.github.io/latest/introduction/index.ht...

tombert•3h ago
I'm actually a bit surprised to hear that assembly is faster than optimized C. I figured that compilers are so good nowadays that any gains from hand-written assembly would be infinitesimal.

Clearly I'm wrong on this; I should probably properly learn assembly at some point...

mhh__•2h ago
Compilers are extremely good considering the amount of crap they have to churn through, but they have zero information (by default) about how the program is going to be used, so it's not hard to beat them.
haiku2077•2h ago
If anyone is curious to learn more, look up "profile-guided optimization," which observes the running program and feeds that information back into the compiler.
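
A minimal sketch of that workflow with GCC; the toy program and its skewed branch are invented for illustration, but -fprofile-generate / -fprofile-use are the standard flag pair:

  /* pgo_demo.c -- the branch below is heavily skewed, which is exactly the
     kind of runtime fact PGO records and feeds back into the compiler.

       gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo   # instrumented build
       ./pgo_demo                                          # run: writes .gcda profile data
       gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo        # rebuild using the profile
  */
  #include <stdio.h>

  int main(void)
  {
      long sum = 0;
      for (long i = 0; i < 100000000; i++) {
          if (i % 100 != 0)                /* taken 99% of the time */
              sum += i;
          else
              sum -= 2 * i;
      }
      printf("%ld\n", sum);
      return 0;
  }
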
mananaysiempre•2h ago
Looking at the linked patches, you’ll note that the baseline (ff_detect_range_c) [1] is bog-standard scalar C code while the speedup is achieved in the AVX-512 version (ff_detect_rangeb_avx512) [2] of the same computation. FFmpeg devs prefer to write straight assembly using a library of vector-width-agnostic macros they maintain, but at a glance the equivalent code looks to be straightforwardly expressible in C with Intel intrinsics if that’s more your jam. (Granted, that’s essentially assembly except with a register allocator, so the practical difference is limited.) The vectorization is most of the speedup, not the assembly.

To a first approximation, modern compilers can’t vectorize loops beyond the most trivial (say a dot product), and even that you’ll have to ask for (e.g. gcc -O3, which in other cases is often slower than -O2). So for mathy code like this they can easily be a couple dozen times behind in performance compared to wide vectors (AVX/AVX2 or AVX-512), especially when individual elements are small (like the 8-bit ones here).

Very tight scalar code, on modern superscalar CPUs... You can outcode a compiler by a meaningful margin, sometimes (my current example is a 40% speedup). But you have to be extremely careful (think dependency chains and execution port loads), and the opportunity does not come often (why are you writing scalar code anyway?..).

[1] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346725.h...

[2] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346726.h...
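
For a feel of the "C with Intel intrinsics" alternative, here is a minimal AVX2 sketch of a byte min/max range scan (not the FFmpeg code; it assumes the length is a multiple of 32 and an -mavx2 build):

  #include <immintrin.h>
  #include <stddef.h>
  #include <stdint.h>

  static void range_u8_avx2(const uint8_t *p, size_t n, uint8_t *lo, uint8_t *hi)
  {
      __m256i vmin = _mm256_set1_epi8((char)0xFF);    /* running minima */
      __m256i vmax = _mm256_setzero_si256();          /* running maxima */
      for (size_t i = 0; i < n; i += 32) {            /* 32 bytes per iteration */
          __m256i v = _mm256_loadu_si256((const __m256i *)(p + i));
          vmin = _mm256_min_epu8(vmin, v);
          vmax = _mm256_max_epu8(vmax, v);
      }
      uint8_t tmin[32], tmax[32];                     /* horizontal reduction */
      _mm256_storeu_si256((__m256i *)tmin, vmin);
      _mm256_storeu_si256((__m256i *)tmax, vmax);
      uint8_t mn = 255, mx = 0;
      for (int i = 0; i < 32; i++) {
          if (tmin[i] < mn) mn = tmin[i];
          if (tmax[i] > mx) mx = tmax[i];
      }
      *lo = mn;
      *hi = mx;
  }

An AVX-512 version would be the same idea with 64-byte registers (plus masked handling of the tail).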

kasper93•2h ago
Moreover, the baseline _c function is compiled with -march=generic and -fno-tree-vectorize on GCC, hence it's the best-case comparison for handcrafted AVX512 code. And while the AVX512 version is obviously faster, and that's very cool, the 100x boast may be misinterpreted by outside readers.

I commented there with some suggested changes, and you can find more performance comparisons [0].

For example, with a small adjustment to the C (sketched after this comment) and compiling it for AVX512:

  after (gcc -ftree-vectorize -march=znver4)
  detect_range_8_c:                                      285.6 ( 1.00x)
  detect_range_8_avx2:                                   256.0 ( 1.12x)
  detect_range_8_avx512:                                 107.6 ( 2.65x)
Also, I argued that it may be a bit misleading to post a comparison without stating the compiler and flags used for said comparison [1].

P.S. There is related work to enable -ftree-vectorize by default [2]

[0] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346813.h...

[1] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346794.h...

[2] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346439.h...
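
A sketch of the kind of small C adjustment referred to above (not the actual patch): a plain min/max reduction written so the vectorizer can pick it up once -ftree-vectorize (or -O3) and a wide -march are in effect.

  #include <stddef.h>
  #include <stdint.h>

  /* Plain C; the ?: form expresses a clean min/max reduction that GCC can
     usually map onto vector byte-min/byte-max instructions. */
  void detect_range_u8(const uint8_t *p, size_t n, uint8_t *lo, uint8_t *hi)
  {
      uint8_t mn = 255, mx = 0;
      for (size_t i = 0; i < n; i++) {
          mn = p[i] < mn ? p[i] : mn;
          mx = p[i] > mx ? p[i] : mx;
      }
      *lo = mn;
      *hi = mx;
  }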

brigade•1h ago
It's AVX512 that makes the gains, not assembly. This kernel is simple enough that it wouldn't be measurably faster than C with AVX512 intrinsics.

And it's 100x because a) min/max have single instructions in SIMD vs cmp+cmov in scalar and b) it's operating in u8 precision so each AVX512 instruction does 64x min/max. So unlike the unoptimized scalar that has a throughput under 1 byte per cycle, the AVX512 version can saturate L1 and L2 bandwidth. (128B and 64B per cycle on Zen 5.)

But this kernel is operating on an entire frame; if you have to go to L3 because it's more than a megapixel, the gain should halve (depending on CPU, but assuming Zen 5), and the gain decreases even more if the frame isn't resident in L3.

saati•38m ago
The AVX2 version was still 64x faster than the C one, so AVX-512 is just a 50% improvement over that. Hand-vectorized assembly is very much the key to the gains.
mafuy•1h ago
If you ever dabble more closely in low-level optimization, you will find the first instance of the C compiler having a brain fart within less than an hour.

Random example: https://stackoverflow.com/questions/71343461/how-does-gcc-no...

The code in question was called quadrillions of times, so this actually mattered.

MobiusHorizons•1h ago
Almost all performance-critical pieces of C/C++ libraries (including things as seemingly mundane as strlen) use specialized hand-written assembly. Compilers are good enough for most people most of the time, but that's only because most people aren't writing software that is worth optimizing to this level from a financial perspective.
jesse__•14m ago
It's extremely easy to beat the compiler by dropping down to SIMD intrinsics. I recently wrote a 4-part guide, of sorts:

https://scallywag.software/vim/blog/simd-perlin-noise-i

cpncrunch•2h ago
The article is unclear about what will actually be affected. It mentions "rangedetect8_avx512" and calls it an obscure function. So, in what situations is it actually used, and what is the real-world improvement in performance for the entire conversion process?
brigade•1h ago
It's not conversion. Rather, this filter is used for video where you don't know whether the pixels are video or full range, or whether the alpha is premultiplied, to determine that information, usually so you can tag it correctly in metadata.

And the function in question is specifically for the color range part.

cpncrunch•54m ago
It's still unclear from your explanation how it's actually used in practice. I run thousands of ffmpeg conversions every day, so it would be useful to know how/if this is likely to help me.

Are you saying that it's run once during a conversion as part of the process? Or that it's a specific flag that you give, which then runs this function and returns output on the console?

(Either of those would be a one-time affair, so would likely result in close to zero speed improvement in the real world).

brigade•38m ago
This is a new filter that hasn't even been committed yet; it only runs if explicitly specified, and it would only ever be specified by someone who already knows that they don't know the characteristics of their video.

So you wouldn’t ever run this.

ivanjermakov•2h ago
Related: ffmpeg's guide to writing assembly: https://news.ycombinator.com/item?id=43140614

Show HN: X11 desktop widget that shows location of your network peers on a map

https://github.com/h2337/connmap
53•h2337•2h ago•30 comments

Agents built from alloys

https://xbow.com/blog/alloy-agents/
44•summarity•2h ago•17 comments

Staying cool without refrigerants: Next-generation Peltier cooling

https://news.samsung.com/global/interview-staying-cool-without-refrigerants-how-samsung-is-pioneering-next-generation-peltier-cooling
184•simonebrunozzi•6h ago•133 comments

XMLUI

https://blog.jonudell.net/2025/07/18/introducing-xmlui/
458•mpweiher•12h ago•243 comments

New colors without shooting lasers into your eyes

https://dynomight.net/colors/
265•zdw•3d ago•72 comments

Log by time, not by count

https://johnscolaro.xyz/blog/log-by-time-not-by-count
13•JohnScolaro•1h ago•7 comments

iMessage integration in Claude can hijack the model to do anything

https://www.generalanalysis.com/blog/imessage-stripe-exploit
25•rhavaeis•1h ago•15 comments

The Genius Device That Rocked F1

https://www.youtube.com/watch?v=FhmLb2DhNYM
27•brudgers•3h ago•1 comments

Stdio(3) change: FILE is now opaque (OpenBSD)

https://undeadly.org/cgi?action=article;sid=20250717103345
106•gslin•8h ago•48 comments

Simulating Hand-Drawn Motion with SVG Filters

https://camillovisini.com/coding/simulating-hand-drawn-motion-with-svg-filters
127•camillovisini•3d ago•13 comments

EU commissioner shocked by dangers of some goods sold by Shein and Temu

https://www.theguardian.com/business/2025/jul/20/eu-commissioner-shocked-dangerous-goods-sold-shein-temu
57•Michelangelo11•6h ago•56 comments

Coding with LLMs in the summer of 2025 – an update

https://antirez.com/news/154
422•antirez•15h ago•297 comments

Peep Show – The Most Realistic Portrayal of Evil Ever Made (2020)

https://mattlakeman.org/2020/01/22/peep-show-the-most-realistic-portrayal-of-evil-ive-ever-seen/
69•Michelangelo11•5h ago•20 comments

Slow Motion Became Cinema's Dominant Special Effect

https://newrepublic.com/article/196262/slow-motion-became-cinema-dominant-special-effect-downtime
4•cainxinth•3d ago•0 comments

What birdsong and back ends can teach us about magic

https://digitalseams.com/blog/what-birdsong-and-backends-can-teach-us-about-magic
19•nkurz•2h ago•7 comments

What My Mother Didn't Talk About (2020)

https://www.buzzfeednews.com/article/karolinawaclawiak/what-my-mother-didnt-talk-about-karolina-waclawiak
41•NaOH•3d ago•11 comments

IPv6 Based Canvas

https://canvas.openbased.org/
21•tylermarques•4h ago•0 comments

Speeding up my ZSH shell

https://scottspence.com/posts/speeding-up-my-zsh-shell
141•saikatsg•10h ago•68 comments

SIOF (Scheme in One File) – A Minimal R7RS Scheme System

https://github.com/false-schemers/siof
13•gjvc•1d ago•0 comments

Why not to use iframes for embedded dashboards

https://embeddable.com/blog/iframes-for-embedding
12•rogansage•2d ago•7 comments

Subreply – an open source text-only social network

https://github.com/lucianmarin/subreply
62•lcnmrn•8h ago•41 comments

Show HN: Conductor, a Mac app that lets you run a bunch of Claude Codes at once

https://conductor.build/
129•Charlieholtz•3d ago•60 comments

JOVE – Jonathan’s Own Version of Emacs (1983)

https://github.com/jonmacs/jove/
40•nanna•3d ago•24 comments

Discovering what we think we know is wrong

https://www.science.org/content/blog-post/tell-me-again-about-neurons-now
19•strangattractor•2d ago•12 comments

Logical implication is a comparison operator

https://btdmaster.bearblog.dev/logical-implication-as-comparison/
13•btdmaster•3d ago•3 comments

Computational complexity of neural networks (2022)

https://lunalux.io/introduction-to-neural-networks/computational-complexity-of-neural-networks/
11•mathattack•2h ago•1 comments

Digital vassals? French Government 'exposes citizens' data to US'

https://brusselssignal.eu/2025/07/digital-vassals-french-government-exposes-citizens-data-to-us/
188•ColinWright•15h ago•81 comments

Insights on Teufel's First Open-Source Speaker

https://blog.teufelaudio.com/visionary-mynds-insights-on-teufels-first-open-source-speaker/
76•lis•9h ago•13 comments

Tough news for our UK users

https://blog.janitorai.com/posts/3/
250•airhangerf15•6h ago•216 comments