March 21, 2026 · Open Source · Research · RL

NVIDIA Releases Nemotron-Cascade 2: Open 30B Agentic Model with 3B Active Parameters

NVIDIA has released Nemotron-Cascade 2, an open-weight 30B Mixture-of-Experts model with only 3B activated parameters, delivering strong reasoning and agentic capabilities at a fraction of typical frontier model size.
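The "30B total, 3B active" figure comes from sparse Mixture-of-Experts routing: for each token, a router selects only a few experts to run, so most parameters stay idle. A minimal sketch of top-k gating is below; the expert count, top-k value, and dimensions are illustrative, not Nemotron-Cascade 2's actual configuration.

```python
import numpy as np

# Illustrative top-k expert routing, the mechanism behind sparse MoE
# models' "30B total / 3B active" parameter counts. All sizes here
# (64 experts, top-8, d=16) are hypothetical examples.
rng = np.random.default_rng(0)

num_experts, top_k, d = 64, 8, 16
router = rng.standard_normal((d, num_experts))  # router projection weights
x = rng.standard_normal(d)                      # one token's hidden state

logits = x @ router
top = np.argsort(logits)[-top_k:]               # indices of the k selected experts
gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over winners only

# Only the selected experts execute for this token; the rest stay idle.
active_fraction = top_k / num_experts
print(f"{top_k}/{num_experts} experts active = {active_fraction:.1%} of expert params")
```

With 8 of 64 experts active per token, only 12.5% of the expert parameters participate in each forward pass, which is how a 30B-parameter model can run with roughly the compute of a 3B dense model.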

Nemotron-Cascade 2 is the second open-weight LLM to achieve gold-medal-level performance on the 2025 International Mathematical Olympiad (IMO), the International Olympiad in Informatics (IOI), and the ICPC World Finals. It outperforms the larger Nemotron-3-Super-120B-A12B and Qwen3.5-35B-A3B on mathematics, code reasoning, and instruction-following benchmarks.

The model was trained using Cascade RL — a sequential, domain-wise reinforcement learning approach that prevents catastrophic forgetting across reasoning domains. The pipeline includes specialized stages for code, SWE tasks, instruction following, and long-context processing. For agent tasks, it supports structured tool-calling protocols within system prompts.
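The sequential, domain-wise structure described above can be sketched as a loop over per-domain RL stages, where each stage resumes from the previous stage's checkpoint rather than training all domains jointly. The stage names follow the article; `train_rl_stage` is a hypothetical placeholder for a real RL loop (e.g., PPO on domain-specific rewards), not NVIDIA's implementation.

```python
# Hedged sketch of sequential, domain-wise RL in the spirit of Cascade RL:
# one domain per stage, carrying the policy forward between stages so that
# earlier domain skills are retained rather than overwritten.

STAGES = ["code", "swe", "instruction_following", "long_context"]

def train_rl_stage(policy: dict, domain: str) -> dict:
    # Hypothetical stand-in for an actual RL training loop on `domain`.
    updated = dict(policy)           # start from the previous checkpoint
    updated[domain] = "trained"
    return updated

policy: dict = {}                    # initial (pre-RL) checkpoint
for domain in STAGES:
    # Each stage initializes from the preceding stage's output, which is
    # the article's stated mechanism for avoiding catastrophic forgetting.
    policy = train_rl_stage(policy, domain)

print(list(policy))
```

The key design choice this sketch captures is sequencing: because every stage fine-tunes the checkpoint produced by the prior stage, the final policy has passed through all four domains in order instead of averaging competing gradients from all domains at once.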

Model weights, training data (including 125K agentic + 389K agentless SWE samples), and the full technical report are released under an open license on Hugging Face.

Hugging Face: https://huggingface.co/nvidia/Nemotron-Cascade-2-30B-A3B
Research page: https://research.nvidia.com/labs/nemotron/nemotron-cascade-2/