Every couple of years I refresh my own parallel reduction benchmarks (https://github.com/ashvardanian/ParallelReductionsBenchmark), which are also memory-bound. Mine mostly focus on the boring, simple throughput-maximizing cases on CPUs and GPUs.
Lately, as GPUs are pulled into more general data-processing tasks, I keep running into non-coalesced, pointer-chasing patterns, but I still don't have a good mental model for estimating the cost of different access strategies. A crossover between these two topics (running MLP-style loads on GPUs) might be exactly the missing benchmark, in case someone is looking for a good weekend project!
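For anyone tempted by that weekend project, here is a minimal CUDA sketch of the contrast I have in mind: the same grid-stride sum reduction, once over contiguous data and once through a shuffled index array. Kernel names, sizes, and launch parameters are my own placeholders, not part of the existing benchmark:

    // Sketch only: compare a coalesced sum reduction with a pointer-chasing
    // (indirect gather) one. Names and sizes are hypothetical placeholders.
    #include <cstdio>
    #include <vector>
    #include <numeric>
    #include <algorithm>
    #include <random>
    #include <cuda_runtime.h>

    __global__ void reduce_coalesced(const float* data, float* out, int n) {
        float sum = 0.f;
        // Grid-stride loop: consecutive threads read consecutive elements.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += blockDim.x * gridDim.x)
            sum += data[i];
        atomicAdd(out, sum);
    }

    __global__ void reduce_gather(const float* data, const int* idx, float* out, int n) {
        float sum = 0.f;
        // Same loop shape, but every load goes through a random index,
        // so accesses within a warp are non-coalesced.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += blockDim.x * gridDim.x)
            sum += data[idx[i]];
        atomicAdd(out, sum);
    }

    int main() {
        const int n = 1 << 24;
        std::vector<float> h(n, 1.f);
        std::vector<int> perm(n);
        std::iota(perm.begin(), perm.end(), 0);
        std::shuffle(perm.begin(), perm.end(), std::mt19937{42});

        float *d_data, *d_out;
        int *d_idx;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMalloc(&d_idx, n * sizeof(int));
        cudaMalloc(&d_out, sizeof(float));
        cudaMemcpy(d_data, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_idx, perm.data(), n * sizeof(int), cudaMemcpyHostToDevice);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        float ms;

        // Coalesced pass.
        cudaMemset(d_out, 0, sizeof(float));
        cudaEventRecord(start);
        reduce_coalesced<<<256, 256>>>(d_data, d_out, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);
        printf("coalesced:     %.3f ms\n", ms);

        // Pointer-chasing pass over the same data.
        cudaMemset(d_out, 0, sizeof(float));
        cudaEventRecord(start);
        reduce_gather<<<256, 256>>>(d_data, d_idx, d_out, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);
        printf("pointer-chase: %.3f ms\n", ms);

        cudaFree(d_data); cudaFree(d_idx); cudaFree(d_out);
        return 0;
    }

On most GPUs I'd expect the gather version to land well below the coalesced one in effective bandwidth; the missing piece is a mental model that predicts by how much across different access strategies.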
Lerc•7h ago
It really seemed like there was more to be said there.
elteto•6h ago
Lerc•5h ago
I think the expectation of more comes from predominantly encountering articles of a different form.