embedding-shape•1h ago
One of the HF pages outlines it a bit more clearly:
> We are excited to introduce Qwen-Scope, an interpretability module trained on the Qwen3 and Qwen3.5 series models. Specifically, we integrated and trained Sparse Autoencoders (SAEs) within Qwen’s hidden layers. By implementing sparsity constraints, we can automatically extract data features that are highly decoupled, low-redundancy, and significantly more interpretable. Qwen-Scope can be used not only to analyze the internal mechanisms of Qwen’s behavior but also holds immense potential for model optimization. Application scenarios include steerable inference control, evaluation sample distribution analysis and comparison, data classification and synthesis, and model training and optimization. https://huggingface.co/Qwen/SAE-Res-Qwen3.5-27B-W80K-L0_50
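The core idea described above (a wide autoencoder with a sparsity constraint trained on hidden-layer activations) can be sketched in a few lines. This is a toy, forward-pass-only illustration, not Qwen-Scope's actual code or API; the dimensions are made up, and the `L0_50` in the model name presumably refers to a target of ~50 active features per token, which a real SAE would enforce during training (e.g. via an L1 penalty or top-k activation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_features = 64, 512  # hypothetical sizes; real SAEs are much wider (e.g. W80K = 80k features)

# Randomly initialized weights standing in for a trained SAE
W_enc = rng.normal(scale=0.02, size=(d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(scale=0.02, size=(d_features, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into non-negative (hence sparse-able) features, then reconstruct."""
    feats = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU keeps features non-negative
    x_hat = feats @ W_dec + b_dec               # linear decoder back to residual space
    return x_hat, feats

x = rng.normal(size=(8, d_model))  # stand-in for hidden-layer activations of 8 tokens
x_hat, feats = sae_forward(x)

recon_err = ((x_hat - x) ** 2).mean()      # reconstruction term of the training loss
l0_per_token = (feats > 0).sum(axis=1)     # "L0": how many features fire per token
```

Training would minimize `recon_err` plus a sparsity penalty on `feats`, driving `l0_per_token` down to a small number; each surviving feature then tends to correspond to one interpretable direction in activation space, which is what enables the steering and analysis applications the card lists.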