bee_rider•58m ago
I don’t get why they didn’t compare against BLIS. I know you can only do so many benchmarks, and people will often complain no matter what, but BLIS is the obvious comparison. Maybe BLIS doesn’t have kernels for their platform, but they’d be well served by just mentioning that fact to get that question out of the reader’s head.
BLIS even has mixed-precision interfaces, though it might not cover more exotic stuff like low-precision ints. So this paper could have had a chance to “put some points on the board” against a real top-tier competitor.
my123•50m ago
Section VII.3 has:
> Libraries such as BLIS [19] lack SME support and are therefore excluded from comparison.
dsharlet•47m ago
Maybe you want a comparison anyways, but it won't be competitive. On Apple CPUs, SME is ~8x faster than a single regular CPU core with a good BLAS library.
anematode•1m ago
ARM SME as implemented on the Apple M4 is quite interesting. Super useful for matrix math (as this paper illustrates well), but my attempts at using the SSVE extension for vector math were an utter failure for performance, despite the increased vector width (512 bits vs. 128 bits for NEON). Potentially the switch into/out of streaming mode is too expensive, but my microbenchmarks indicated the SSVE instructions themselves just didn't have great throughput.