NVIDIA

Nemotron Nano 12B V2 VL

flagship
NVIDIA · released 2025-10-01 · text+image+video->text
currently routing · 4.2k rpm
Context: 131K tokens
Input: $0.20 / 1M tokens
Output: $0.60 / 1M tokens
Speed: — t/s
License: open
ABOUT

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s memory-efficient sequence modeling for significantly higher throughput and lower latency.
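The interleaving idea behind a hybrid stack can be sketched in a few lines. This is a toy illustration, not the model's actual architecture: the Mamba side is stood in for by a simple gated linear recurrence, and the layer counts and mixing pattern are assumptions for demonstration only.

```python
import numpy as np

def attention_layer(x):
    # Full self-attention: cost grows quadratically with sequence length.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def ssm_layer(x, decay=0.9):
    # Toy stand-in for a Mamba-style layer: a gated linear recurrence that
    # carries a running state, so cost is linear in sequence length.
    state = np.zeros(x.shape[-1])
    out = np.empty_like(x)
    for t, token in enumerate(x):
        state = decay * state + (1 - decay) * token
        out[t] = token + state
    return out

def hybrid_stack(x, n_layers=6, attn_every=3):
    # Mostly linear-time sequence layers with occasional attention layers,
    # the interleaving pattern hybrid Transformer-Mamba designs use to keep
    # accuracy while cutting memory use and latency on long inputs.
    for i in range(n_layers):
        x = attention_layer(x) if (i + 1) % attn_every == 0 else ssm_layer(x)
    return x

seq = np.random.default_rng(0).normal(size=(16, 8))  # 16 tokens, dim 8
out = hybrid_stack(seq)
print(out.shape)  # (16, 8)
```

The point of the pattern is that only a small fraction of layers pay the quadratic attention cost; the rest run in linear time.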

The model accepts text, multi-image documents, and video as input, producing natural-language outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical character recognition, chart reasoning, and multimodal comprehension.

Nemotron Nano 2 VL achieves leading results on OCRBench v2 and an average score of ≈74 across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open VL baselines. With Efficient Video Sampling (EVS), it handles long-form videos while reducing inference cost.
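The intuition behind EVS-style video token reduction can be sketched as follows. This is a simplified illustration, not NVIDIA's algorithm: it keeps a frame only when its embedding differs enough from the last kept frame, so static stretches of video collapse to one representative frame. The `threshold` knob is hypothetical.

```python
import numpy as np

def efficient_video_sampling(frames, threshold=0.15):
    # Toy sketch of redundancy-based frame pruning: a frame is kept only
    # when its relative change from the last kept frame exceeds the
    # (hypothetical) threshold, shrinking the token count the model sees.
    kept = [0]
    for i in range(1, len(frames)):
        change = np.linalg.norm(frames[i] - frames[kept[-1]])
        change /= np.linalg.norm(frames[kept[-1]]) + 1e-8
        if change > threshold:
            kept.append(i)
    return kept

# Ten nearly identical frames followed by a scene change at frame 10.
static = np.ones((10, 64))
moving = np.full((2, 64), 5.0)
video = np.concatenate([static, moving])
print(efficient_video_sampling(video))  # [0, 10]
```

Twelve frames reduce to two, which is the kind of saving that makes long-form video tractable at inference time.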

Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, with deployment supported across NeMo, NIM, and major inference runtimes.
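Models served through such runtimes are typically reached via an OpenAI-compatible chat endpoint that accepts mixed text-and-image content. A minimal request body might look like the sketch below; the model id and image URL are illustrative assumptions, not confirmed by this page.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# The model id and URL are placeholders, not values from this listing.
payload = {
    "model": "nvidia/nemotron-nano-12b-v2-vl",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarize the chart on this document page."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/doc-page.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}
body = json.dumps(payload)
print(len(json.loads(body)["messages"][0]["content"]))  # 2
```

The same `content` list shape extends to multi-image documents by appending further `image_url` entries.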

BENCHMARKS

Artificial Analysis Index
Intelligence: 14.9
Coding: 11.8
Agentic: 7.1

Providers for Nemotron Nano 12B V2 VL

2 routes · sorted by uptime

ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.

Provider        Context  Quant  Uptime · 30d
(not shown)     131K     bf16   0.00%
(not shown)     131K     bf16   0.00%