
Nemotron 3 Super 120B

flagship
NVIDIA · released 2025-03-18 · text->text
currently routing · 4.2k rpm

Context: 262K tokens
Input: $0.09 / 1M tokens
Output: $0.45 / 1M tokens
Speed: — t/s
License: open
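
As a quick sanity check on the listed rates, here is a minimal cost-arithmetic sketch; the token counts in the example are made up, only the per-million-token prices come from the listing above.

```python
# Hypothetical cost estimate using the listed rates ($0.09 / 1M input, $0.45 / 1M output).
INPUT_RATE = 0.09 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.45 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 200K-token prompt with a 4K-token completion.
print(f"${request_cost(200_000, 4_000):.4f}")  # ≈ $0.0198
```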
ABOUT

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in complex multi-agent applications. Built on a hybrid Mamba-Transformer Mixture-of-Experts architecture with multi-token prediction (MTP), it delivers over 50% higher token-generation throughput than leading open models. The model features a 1M-token context window for long-term agent coherence, cross-document reasoning, and multi-step task planning. Latent MoE enables calling 4 experts for the inference cost of only one, improving intelligence and generalization. Multi-environment RL training across 10+ environments delivers leading accuracy on benchmarks including AIME 2025, TerminalBench, and SWE-Bench Verified. Fully open with weights, datasets, and recipes under the NVIDIA Open License, Nemotron 3 Super allows easy customization and secure deployment anywhere — from workstation to cloud.
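The active-parameter claim comes down to sparse expert routing: only a few experts run for each token, so the compute per token tracks the active parameters (12B) rather than the total (120B). Below is a minimal, illustrative top-k MoE routing sketch; it is not NVIDIA's implementation, and the layer sizes, expert count, and k are made-up values.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Illustrative top-k mixture-of-experts layer (toy dimensions, not Nemotron's).
    Only k of n_experts run per token, so active parameters << total parameters."""

    def __init__(self, d_model: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        scores = self.router(x)                           # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)        # pick k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```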

BENCHMARKS (Artificial Analysis Index)
Intelligence: 36
Coding: 31.2
Agentic: 40.2

Providers for Nemotron 3 Super 120B

4 routes · sorted by uptime

ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.
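
The routing behavior can be thought of as an ordered fallback over the provider list. Here is a rough client-side sketch of that idea, assuming a generic OpenAI-style HTTP endpoint; the URLs, model slug, and field names are placeholders, not a documented ClosedRouter API.

```python
import requests

# Placeholder endpoints; the actual provider URLs and auth are not shown on this page.
PROVIDERS = [
    "https://provider-a.example/v1/chat/completions",
    "https://provider-b.example/v1/chat/completions",
]

def complete_with_fallback(prompt: str, timeout: float = 30.0) -> str:
    """Try each provider in order; fall back to the next route on failure."""
    last_error = None
    for url in PROVIDERS:
        try:
            resp = requests.post(
                url,
                json={
                    "model": "nvidia/nemotron-3-super-120b",  # assumed slug for illustration
                    "messages": [{"role": "user", "content": prompt}],
                },
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        except requests.RequestException as err:
            last_error = err  # this provider failed; try the next route
    raise RuntimeError("all providers failed") from last_error
```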

Provider    Context    Quant    Uptime · 30d
—           262K       bf16     0.00%
—           262K       bf16     0.00%
—           262K       bf16     0.00%
—           262K       bf16     0.00%