This is huge news, can't believe it hasn't hit the front page yet! It's worth pointing out that this is a non-reasoning model.
> Benchmarks show it beats our previous best, Qwen3-235B-A22B-2507
Qwen Chat describes the model as follows: "Qwen3-Max-Preview is the most advanced language model in the Qwen series, excelling in complex reasoning, instruction following, mathematics, coding, role-playing, creative writing, and more."
> Maximum context length: 262,144 tokens
> Maximum generation length: 32,768 tokens
Compared to Qwen3-235B-A22B-2507, which has:
> Maximum context length: 131,072 tokens
> Max summary generation length: 8,192 tokens
So 2x the maximum context of the previous model and 4x the maximum generation length.
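For anyone who wants to poke at those limits outside the chat UI, here's a rough sketch of calling the model through an OpenAI-compatible endpoint. The base URL, model id, and env var name are my assumptions, so check the DashScope docs before copying:

```python
# Rough sketch: querying Qwen3-Max-Preview via an OpenAI-compatible endpoint.
# Assumptions (verify against the official docs): the base_url, the
# "qwen3-max-preview" model id, and the DASHSCOPE_API_KEY env var name.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed env var name
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

resp = client.chat.completions.create(
    model="qwen3-max-preview",  # assumed model id
    messages=[{"role": "user", "content": "Summarize the Qwen3 series in 3 bullets."}],
    max_tokens=32_768,  # cap at the stated maximum generation length
)
print(resp.choices[0].message.content)
```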
I've been using Qwen3-235B-A22B-2507 as my daily driver on Qwen Chat since it was released, and I can say I've been very satisfied. Especially because of Alibaba's generous free tier: no timeout, no rate limit, nothing. And for larger contexts, Qwen2.5-14B-Instruct-1M is an amazing model. They also actually benchmark against the beasts, like Opus 4 (non-thinking), Kimi K2, and DeepSeek V3.1.
Can't wait for the new benchmarks at Artificial Analysis to see how it actually compares against the other models, especially GPT-5, Gemini 2.5 Pro, and Grok 4.
Alibaba has been pushing hard lately, incredible work. They deserve way more hype and recognition than they currently get!