Introducing Radial Attention: a static sparse attention mechanism with O(n log n) complexity for long video generation! Key features:
* Plug-and-play: works with pretrained models such as Wan, HunyuanVideo, and Mochi
* Speeds up both training and inference by 2–4× without quality loss
* Compatible with pretrained LoRAs; applied on top of the 8-step FusionX LoRA, Radial Attention delivers a further 1.6× speedup
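
To give a feel for how a static sparse mask can reach O(n log n) attended pairs, here is a minimal, hedged sketch. It is not the official Radial Attention implementation; the function name `radial_style_mask` and the `window` parameter are illustrative assumptions. The idea it demonstrates: each query keeps a dense local window and, in distance bands that double in width, keeps only strided keys, so every band contributes roughly a constant number of keys per query and the total cost grows like n log n rather than n².

```python
# Hedged sketch of a static, distance-banded sparse attention mask.
# Not the official Radial Attention code; names and parameters are assumptions.
import torch


def radial_style_mask(n: int, window: int = 16) -> torch.Tensor:
    """Boolean (n, n) mask; True = attend. Density decays with distance |i - j|."""
    i = torch.arange(n).unsqueeze(1)   # query positions, shape (n, 1)
    j = torch.arange(n).unsqueeze(0)   # key positions, shape (1, n)
    dist = (i - j).abs()

    mask = dist < window               # dense local band around each query
    lo, k = window, 0
    while lo < n:
        hi = lo * 2
        # In band [lo, hi): keep keys on a stride-2^(k+1) grid,
        # so each query attends ~window/2 keys per band.
        band = (dist >= lo) & (dist < hi) & (j % (2 ** (k + 1)) == 0)
        mask |= band
        lo, k = hi, k + 1
    return mask


if __name__ == "__main__":
    m = radial_style_mask(1024)
    # Attended pairs grow roughly like n * log(n) instead of n^2.
    print(m.sum().item(), "attended pairs out of", m.numel())
```

Because the mask is fixed (static) rather than computed from the content of each input, it can be baked into both training and inference kernels, which is what makes this style of sparsity plug-and-play for pretrained models.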