April 16, 2026 · Open Source · Coding Agents

Qwen3.6 punches above its weight

Alibaba shipped Qwen3.6-35B-A3B this week, and on agentic coding it beats Gemma4-31B on basically every benchmark that matters. Open source.

The A3B trick is the important bit. 35 billion parameters total, but only 3 billion active per token via MoE. You get the economics of a small model and the capacity of a much bigger one. On Terminal-Bench 2.0 — agentic terminal coding, the thing where the model has to run commands and actually drive a shell — it scores 51.5. Gemma4-31B scores 42.9. On SWE-bench Pro, Qwen3.6 hits 49.5, Gemma4 sits at 35.7.
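The economics are easy to sanity-check. A rough sketch of the arithmetic (per-token compute scales approximately with active parameters; the 35B/3B split is from the release, the dense comparison is illustrative):

```python
# Rough MoE economics: per-token compute scales with *active* params,
# while memory footprint and model capacity scale with *total* params.
TOTAL_PARAMS = 35e9   # Qwen3.6-35B-A3B: total parameters
ACTIVE_PARAMS = 3e9   # parameters activated per token via MoE routing

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"active fraction per token: {active_fraction:.1%}")  # ~8.6%

# Against a dense ~31B model, the per-token FLOP budget is roughly
# proportional to active parameters:
DENSE_PARAMS = 31e9
flop_ratio = ACTIVE_PARAMS / DENSE_PARAMS
print(f"per-token compute vs. dense 31B: ~{flop_ratio:.0%}")  # ~10%
```

So you pay dense-3B prices at inference time while routing across 35B parameters of capacity, which is the whole pitch.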

Context length is 262K native, extensible to over 1M. The model also now retains reasoning context across historical messages, the kind of small detail that makes iterative coding actually usable. Agentic workflows compound — every token saved per turn adds up over a 30-turn repo refactor.
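To make the compounding concrete, here is a toy calculation (the numbers are purely illustrative, not from any benchmark; it assumes the full history is reprocessed each turn, as in a typical agent loop):

```python
# Toy model: an agent re-sends its growing history every turn, so a
# small per-turn saving compounds across the whole session.
TURNS = 30                # e.g. a 30-turn repo refactor
tokens_per_turn = 4_000   # hypothetical context growth per turn
saved_per_turn = 500      # hypothetical tokens trimmed per turn

# Turn t reprocesses roughly t turns' worth of history.
baseline = sum(tokens_per_turn * t for t in range(1, TURNS + 1))
trimmed = sum((tokens_per_turn - saved_per_turn) * t for t in range(1, TURNS + 1))

print(f"baseline tokens processed: {baseline:,}")   # 1,860,000
print(f"with per-turn savings:     {trimmed:,}")    # 1,627,500
print(f"total saved:               {baseline - trimmed:,}")  # 232,500
```

Even a modest 500-token trim per turn saves six figures' worth of tokens over one long session, which is why these "small details" matter for agentic use.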

The open-to-all framing is the real message. Qwen has been positioning itself as the default open alternative to closed agentic coders. With Kimi, DeepSeek, GLM, and now Qwen3.6 all shipping competitive agentic coding models in 2026, the Chinese open-weight stack has become a legitimate parallel track to Anthropic and OpenAI. Unsloth already has GGUF quantizations out, so you can run it locally today.
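Running it locally with llama.cpp would look something like this (a sketch: the repo name follows Unsloth's usual `-GGUF` naming convention and the `Q4_K_M` quant is an assumption, so check the actual Hugging Face listing before running):

```shell
# Pull a quantized GGUF straight from Hugging Face and start an
# OpenAI-compatible local server. Repo/quant names are assumptions.
llama-server \
  -hf unsloth/Qwen3.6-35B-A3B-GGUF:Q4_K_M \
  -c 32768 \
  --port 8080
```

Because only ~3B parameters are active per token, a 4-bit quant should be comfortable on a single consumer GPU plus CPU offload.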

Model: https://huggingface.co/Qwen/Qwen3.6-35B-A3B