It's a drop-in replacement for nn.Linear (CurvatureTuner) and works out of the box on MLPs. Transformer tests are planned next.

Benchmark on a ~400k-param MLP (synthetic data, 5 epochs):

- Baseline: train 2.45s / inference 0.0045s
- Enhanced: ~281k effective params (40% reduction) / train 1.55s (1.58x speedup) / inference 0.0022s (2.05x speedup)
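For anyone who wants to try it, here's a minimal usage sketch. It assumes CurvatureTuner keeps nn.Linear's `(in_features, out_features)` constructor signature; the import path and layer sizes are illustrative (the sizes just land near the ~400k params above), so check the repo for the actual API:

```python
# Minimal sketch of the drop-in swap. The CurvatureTuner import path
# and constructor signature are assumptions -- see the repo's README
# for the real API.
import torch.nn as nn

# from ai_accel import CurvatureTuner  # hypothetical import path

class MLP(nn.Module):
    def __init__(self, linear_cls=nn.Linear):
        super().__init__()
        # Pass linear_cls=CurvatureTuner to swap in the enhanced layer;
        # the optimizer, loss, and training loop stay unchanged.
        self.net = nn.Sequential(
            linear_cls(784, 512),
            nn.ReLU(),
            linear_cls(512, 10),
        )

    def forward(self, x):
        return self.net(x)

model = MLP()  # baseline with plain nn.Linear
# model = MLP(linear_cls=CurvatureTuner)  # enhanced variant
print(sum(p.numel() for p in model.parameters()))  # ~407k params here
```

Since it's a drop-in replacement, comparing baseline vs. enhanced is just a matter of swapping the layer class and re-running the same training loop.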
Repo (MIT): https://github.com/wwes4/AI_Accel_1.5x

Feedback, forks, and real-dataset tests are very welcome! Inspired by unconventional efficiency ideas.