First, Adaptive Tool Calling: no manual prompting is needed. The model autonomously invokes search engines, memory tools, and code interpreters based on task demands, which reduces hallucinations and improves real-time problem-solving. For instance, coding tasks trigger automatic error-correction loops, while research tasks combine search with context synthesis.

Second, Test-Time Scaling (TTS): instead of standard parallel sampling, it iteratively refines its reasoning, with measurable gains on key benchmarks: GPQA rose from 90.3 to 92.8, LiveCodeBench v6 from 88.0 to 91.4, and IMO-AnswerBench from 89.5 to 91.5.
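To make the adaptive-tool-calling idea concrete, here is a minimal sketch of a dispatcher with an error-correction loop. The tool names, routing rules, and retry logic are illustrative assumptions, not the model's actual internals:

```python
import re

def route(task: str) -> str:
    """Pick a tool from the task text alone, with no manual tool prompt.
    (Hypothetical tool names and keyword rules, for illustration only.)"""
    if re.search(r"\b(code|implement|debug|function)\b", task, re.I):
        return "code_interpreter"
    if re.search(r"\b(latest|news|search|today)\b", task, re.I):
        return "search_engine"
    return "memory"

def solve_with_retries(candidate, execute, revise, max_retries: int = 3):
    """Error-correction loop: execute a candidate, revise on failure, retry."""
    for _ in range(max_retries):
        ok, feedback = execute(candidate)
        if ok:
            return candidate
        candidate = revise(candidate, feedback)
    return candidate
```

A coding task would route to `code_interpreter` and then pass through `solve_with_retries`, where `execute` runs the code and `revise` patches it using the failure feedback.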
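The TTS contrast above can be sketched as iterative refinement versus best-of-n parallel sampling. This is a toy simulation where the "model" is just a random quality score, not the real procedure:

```python
import random

def sample(prev_quality=None, rng=None):
    """Stand-in model call returning an answer quality in [0, 1].
    Given a previous answer's quality, 'refinement' nudges it upward."""
    if prev_quality is None:
        return rng.random()                             # fresh independent draw
    return min(1.0, prev_quality + 0.2 * rng.random())  # small improvement step

def parallel_sampling(n, seed=0):
    """Best-of-n baseline: n independent samples, keep the best."""
    rng = random.Random(seed)
    return max(sample(rng=rng) for _ in range(n))

def iterative_refinement(steps, seed=0):
    """TTS-style loop: each step builds on the previous answer."""
    rng = random.Random(seed)
    quality = sample(rng=rng)
    for _ in range(steps):
        quality = sample(prev_quality=quality, rng=rng)
    return quality
```

With the same compute budget (number of model calls), the refinement loop compounds earlier progress instead of discarding it, which is the intuition behind the benchmark gains cited above.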
Notably, its preview version even achieved 100% accuracy in tough math contests like AIME 25 and HMMT 25. The model runs smoothly on web/desktop demos, and its API is production-ready with adjustable thinking budgets (up to 80K tokens by default) to balance depth and speed. This isn’t just an incremental update—it’s a leap that closes the gap in reasoning and tool integration for real-world academic and engineering tasks.
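A thinking-budget cap might look like the request sketch below. The endpoint shape follows OpenAI-compatible chat APIs, but the model id and parameter names (`enable_thinking`, `thinking_budget`) are assumptions for illustration, not confirmed API details:

```python
def build_request(prompt: str, thinking_budget: int = 80_000) -> dict:
    """Build a chat request that caps reasoning tokens: a smaller budget
    trades reasoning depth for latency. Parameter names are hypothetical."""
    return {
        "model": "qwen-thinking-preview",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "extra_body": {
            "enable_thinking": True,
            "thinking_budget": thinking_budget,  # post cites 80K-token default
        },
    }

# Usage: shrink the budget for a latency-sensitive call.
payload = build_request("Summarize this paper in three bullets.", thinking_budget=8_000)
```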
Check it out: https://chat.qwen.ai/