A zero-IP-exposure way to test our proprietary sparse AI kernel (ROLV) on any hardware
We just released a public Verification Kit that lets anyone test ROLV on *their own hardware* — laptop, on-prem cluster, H100, MI300X, custom ASIC, phone, whatever — with zero risk to our IP.
How it works:
1. You run one tiny open-source script (zero ROLV code inside)
2. It generates a deterministic test matrix (default: 20k×20k @ 70% sparsity, batch 5000, 1000 iters)
3. You get a small JSON with baseline timing + cryptographic hashes + your hardware details
4. Send the JSON to us
5. We run the full proprietary ROLV harness on the exact same inputs and send you a professional comparison report (speedup, energy savings, tokens/sec, FLOPs, correctness hashes)
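For readers curious what steps 2–3 could look like under the hood, here is a minimal sketch. Everything in it is an assumption for illustration: the function name, defaults, JSON fields, and the dense-matmul baseline are not the actual rolv-verifier.py, just a plausible shape for a seeded generate-time-hash workflow.

```python
# Illustrative sketch only -- NOT the real rolv-verifier.py.
# Shows the idea: a fixed seed makes the inputs deterministic, so both
# sides can regenerate the exact same matrix and compare hashes.
import hashlib
import json
import platform
import time

import numpy as np

def run_verifier(N=20_000, zeros=0.70, batch=5_000, iters=1_000, seed=0):
    rng = np.random.default_rng(seed)            # fixed seed -> reproducible inputs
    A = rng.standard_normal((N, N)).astype(np.float32)
    A[rng.random((N, N)) < zeros] = 0.0          # zero out ~`zeros` fraction of entries
    x = rng.standard_normal((N, batch)).astype(np.float32)

    t0 = time.perf_counter()
    for _ in range(iters):
        y = A @ x                                # dense baseline matmul for timing
    elapsed = time.perf_counter() - t0

    return {
        "params": {"N": N, "zeros": zeros, "batch": batch,
                   "iters": iters, "seed": seed},
        "baseline_seconds": elapsed,
        "input_hash": hashlib.sha256(A.tobytes()).hexdigest(),
        "output_hash": hashlib.sha256(y.tobytes()).hexdigest(),
        "hardware": {"machine": platform.machine(),
                     "processor": platform.processor()},
    }

# Tiny demo run (real defaults above would allocate a ~1.6 GB matrix)
print(json.dumps(run_verifier(N=256, batch=8, iters=3), indent=2))
```

The key property is that the hashes depend only on the seed and parameters, so a JSON produced on one machine pins down the exact inputs for a rerun elsewhere.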
Everything is reproducible and verifiable. No code, no weights, no IP ever leaves our side.
Full benchmarks, methodology, and the official PDF are here:
https://rolv.ai
The Verification Kit itself (one file, works everywhere)
You can change any parameter with flags:
python rolv-verifier.py --N 16384 --zeros 0.92 --batch 2048 --iters 500
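For illustration, here is one way a script could map those flags to parameters. This is a sketch using the flag names from the command above; the actual argument handling in rolv-verifier.py may differ.

```python
# Hypothetical flag parsing, matching the flags documented in the post.
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser(description="ROLV Verification Kit (sketch)")
    p.add_argument("--N", type=int, default=20_000, help="matrix dimension (N x N)")
    p.add_argument("--zeros", type=float, default=0.70, help="sparsity fraction in [0, 1]")
    p.add_argument("--batch", type=int, default=5_000, help="batch size")
    p.add_argument("--iters", type=int, default=1_000, help="timing iterations")
    return p.parse_args(argv)

args = parse_args(["--N", "16384", "--zeros", "0.92", "--batch", "2048", "--iters", "500"])
print(args.N, args.zeros, args.batch, args.iters)
```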
Would love feedback from people running it on different chips, especially with large matrices. If you try it, drop your results or questions below.
(ROLV is a new sparse compute primitive — more details on the site)