While working on new ML architectures, I struggled with stabilizing training via countless learning-rate schedulers, gradient clippers and normalizers long enough that I went and implemented a schedule-free optimizer.
Here is the Lion Schedule-Free optimizer: a version of the Lion optimizer that requires no learning-rate scheduler. It uses sign agreement, the absolute value of the cross-correlation between the momentum sign and the gradient sign, to scale the effective update step. Not only does it converge 3x faster ON MY MODEL, but by eliminating the LR scheduler it also allows hot training resume and restart. It also stabilizes training, especially late in training, eliminating the need for gradient clipping and the like. The effective update depends on the training regime and can decrease or increase during training.
In this implementation, the sign agreement is calculated per-module. It would probably be more logical and stable to calculate it per parameter group, but that's more code, and module-wise already works pretty well...
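To make the mechanism concrete, here is a minimal sketch of the idea in PyTorch. This is not the actual lion-sf code: the function name, the exact placement of the scaling factor, and the beta handling are my assumptions; it just shows a Lion-style update for one module's parameters with the per-module sign agreement multiplied into the step size.

import torch

@torch.no_grad()
def lion_sf_step_sketch(params, momenta, lr=5e-4, beta1=0.9, beta2=0.99, weight_decay=1e-2):
    # `params` and `momenta` are matching lists of tensors for one module.
    # Sign agreement: |mean of sign(momentum) * sign(grad)| over the module.
    dots, count = 0.0, 0
    for p, m in zip(params, momenta):
        if p.grad is None:
            continue
        dots += (m.sign() * p.grad.sign()).sum().item()
        count += p.grad.numel()
    agreement = abs(dots / max(count, 1))  # in [0, 1]

    for p, m in zip(params, momenta):
        if p.grad is None:
            continue
        g = p.grad
        # Lion: interpolate momentum and gradient, step in the sign direction,
        # here scaled by the sign agreement instead of a scheduled LR (assumption).
        update = (beta1 * m + (1 - beta1) * g).sign()
        p.mul_(1 - lr * weight_decay)           # decoupled weight decay
        p.add_(update, alpha=-lr * agreement)   # agreement-scaled step
        m.mul_(beta2).add_(g, alpha=1 - beta2)  # momentum update
    return agreement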
The optimizer is provided as is. There will be no paper, no convergence guarantees, no ablation studies and no time to do any of that.
Install it:

pip install git+https://github.com/govorunov/lion-sf.git

And use it as a normal optimizer:

from lion_pytorch import LionSF

optimizer = LionSF(model.parameters(), lr=5e-4, betas=(0.9, 0.99), weight_decay=1e-2)
Give it a generous base learning rate, like 5e-4 or more, and ditch the LR scheduler completely. You can also ditch gradient clipping (as I did).
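For reference, the training loop then stays completely bare, with nothing scheduled or clipped. This continues from the snippet above; model, train_loader, loss_fn and num_epochs are placeholders for your own setup.

for epoch in range(num_epochs):
    for batch, target in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch), target)
        loss.backward()
        optimizer.step()  # no scheduler.step(), no clip_grad_norm_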
If you want to resume or restart training later from a checkpoint, keep the optimizer state and do a hot restart. There is no need for a warm-up: it will restart gently on its own. The ability to do a hot restart, together with the increased training stability, is probably more important (to me) than the faster convergence, although faster convergence looks better on plots.
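A hot restart amounts to ordinary PyTorch checkpointing, with the optimizer state saved and restored alongside the model. A rough sketch, with checkpoint.pt as a placeholder path and the same constructor arguments as above:

import torch

# Save the optimizer state along with the model so a hot restart is possible.
torch.save({
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
}, "checkpoint.pt")

# Later: hot restart, no warm-up needed.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer = LionSF(model.parameters(), lr=5e-4, betas=(0.9, 0.99), weight_decay=1e-2)
optimizer.load_state_dict(checkpoint["optimizer"])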