I'm interested in adding WebGPU support to a distributed ML project, so I wondered how much faster it is than CPU inference. I asked Claude to build this benchmark, then add export functionality. You can use it to see how much faster inference runs with WebGPU on your system, and download the results as a .csv file if you'd like to save them.
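For reference, here is a minimal sketch of how a benchmark like this can be structured in the browser, not the project's actual code: `runOnCpu` and `runOnGpu` are hypothetical placeholders for the real inference calls, and the `GPUDevice` type assumes the `@webgpu/types` package. It detects WebGPU via the standard `navigator.gpu` API, times both backends with `performance.now()`, and serves the results as a .csv download.

```ts
// Minimal sketch: detect WebGPU, time a workload on CPU vs GPU,
// and offer the results as a .csv download. `runOnCpu` and
// `runOnGpu` are hypothetical stand-ins for real inference calls.

async function benchmark(
  runOnCpu: () => Promise<void>,
  runOnGpu: (device: GPUDevice) => Promise<void>
): Promise<void> {
  const rows: string[] = ["backend,ms"];

  // Time the CPU path.
  const cpuStart = performance.now();
  await runOnCpu();
  rows.push(`cpu,${(performance.now() - cpuStart).toFixed(1)}`);

  // WebGPU is exposed through navigator.gpu; it is undefined in
  // browsers without support, hence the optional chaining.
  const adapter = await navigator.gpu?.requestAdapter();
  if (adapter) {
    const device = await adapter.requestDevice();
    const gpuStart = performance.now();
    await runOnGpu(device);
    // Wait for queued GPU work to finish before stopping the clock.
    await device.queue.onSubmittedWorkDone();
    rows.push(`webgpu,${(performance.now() - gpuStart).toFixed(1)}`);
  } else {
    rows.push("webgpu,unsupported");
  }

  // Package the rows as a downloadable .csv file.
  const blob = new Blob([rows.join("\n")], { type: "text/csv" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "benchmark.csv";
  link.click();
  URL.revokeObjectURL(link.href);
}
```

Waiting on `device.queue.onSubmittedWorkDone()` matters here: GPU submissions are asynchronous, so stopping the timer without it would measure only how quickly the work was queued, not how long it took to run.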