Qwen3.6-27B: A 27B Dense Model That Beats Last Year's 397B Flagship
Alibaba's Qwen team dropped Qwen3.6-27B on April 22 under Apache 2.0, and the headline number is brutal: 77.2 on SWE-bench Verified from a dense 27B model. Last generation's flagship, Qwen3.5-397B-A17B (807GB on disk), gets surpassed by this 55.6GB model on every major coding benchmark. Terminal-Bench 2.0 jumps to 59.3, matching Claude 4.5 Opus, and SkillsBench Avg5 leaps from 27.2 to 48.2.
Native context is 262,144 tokens, scalable to 1,010,000 with YaRN. Architecture is hybrid Gated DeltaNet plus Gated Attention with multi-token prediction support. The interesting agent feature is preserve_thinking: across multi-turn conversations the reasoning context survives, which kills the redundant chain-of-thought rebuilds that eat tokens in long agentic loops.
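The release names preserve_thinking but not its exact API surface. A minimal sketch of how an agent loop might pass it through an OpenAI-compatible endpoint — the field name's placement under `chat_template_kwargs`, the model ID string, and the message shapes are assumptions here, not documented behavior:

```python
# Hypothetical request builder for a multi-turn agent loop.
# `preserve_thinking` is the feature named in the Qwen3.6 release notes;
# routing it via `chat_template_kwargs` is an assumed convention, not confirmed.

def build_turn(history, user_msg):
    """Append the next user turn and build a request body asking the
    server to carry prior reasoning context forward instead of
    rebuilding the chain of thought from scratch each turn."""
    messages = history + [{"role": "user", "content": user_msg}]
    return {
        "model": "Qwen/Qwen3.6-27B",
        "messages": messages,
        # Assumed flag location; check the serving stack's docs.
        "chat_template_kwargs": {"preserve_thinking": True},
    }

history = [{"role": "system", "content": "You are a coding agent."}]
req = build_turn(history, "Run the test suite and fix any failures.")
```

The point of the flag, per the post, is token economy: without it, each turn of a long agentic loop re-derives reasoning the model already produced.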
The meta-story is the dense vs MoE pendulum swinging back. Dense 27B you can serve on a single 8xH100 node. MoE 397B you cannot. For self-hosters, agent shops, and anyone who wants flagship coding behavior without renting a cluster, this is the new default open-weight pick. Vision is included too, with 82.9 MMMU and 70.3 AndroidWorld. Source: https://qwen.ai/blog?id=qwen3.6-27b Β· Weights: https://huggingface.co/Qwen/Qwen3.6-27B
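The single-node claim follows from back-of-envelope footprint arithmetic. A quick check in Python, counting bf16 weights only and ignoring KV cache and activations, so this is a sketch rather than a capacity plan:

```python
# Rough bf16 weight footprints vs. one 8x H100-80GB node's HBM.
BYTES_PER_PARAM_BF16 = 2
GB = 1e9

def weight_footprint_gb(params_billion):
    """Approximate bf16 weight footprint in GB (weights only)."""
    return params_billion * 1e9 * BYTES_PER_PARAM_BF16 / GB

node_hbm_gb = 8 * 80  # 640 GB of HBM on an 8x H100-80GB node

dense = weight_footprint_gb(27)   # ~54 GB, close to the 55.6 GB on-disk figure
moe = weight_footprint_gb(397)    # ~794 GB, in line with the quoted 807 GB

print(f"dense 27B: {dense:.0f} GB, fits on one node: {dense < node_hbm_gb}")
print(f"MoE 397B:  {moe:.0f} GB, fits on one node: {moe < node_hbm_gb}")
```

The dense model leaves hundreds of gigabytes of headroom for long-context KV cache; the 397B MoE's weights alone exceed the node's HBM before a single token is cached.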