Mistral Small 4 119B (2603)

flagship · Mistral · released 2026-03-01 · text
currently routing · 4.2k rpm

Context: 128M tokens
Input: — / 1M tokens
Output: — / 1M tokens
Speed: — t/s
License: open
ABOUT

Mistral Small 4 119B (March 2026) is an open-weight Mixture-of-Experts model from Mistral AI, featuring 119 billion total parameters with efficient expert routing. As part of the Mistral Small family, it delivers strong general-purpose capabilities while being optimized for cost-efficient inference.

The model uses a MoE architecture where only a fraction of parameters are active per token, enabling high-quality outputs at lower computational cost than dense models of equivalent total size. It supports multilingual text generation, coding, reasoning, and instruction following with competitive benchmark performance.
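The sparse-activation idea described above can be sketched in a few lines. This is an illustrative top-k gating example only; the expert count, hidden size, and `k` here are hypothetical and do not reflect Mistral's actual configuration.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route a token vector x to its top-k experts and mix their outputs."""
    logits = x @ gate_w                    # one gate score per expert
    topk = np.argsort(logits)[-k:]         # indices of the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only k experts run per token, so compute scales with k,
    # not with the total expert count (the source of MoE's cost savings).
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.standard_normal((d, n_experts))
# Each "expert" is just a linear map here, standing in for a full FFN block.
experts = [lambda x, W=rng.standard_normal((d, d)): x @ W
           for _ in range(n_experts)]
y = moe_forward(rng.standard_normal(d), gate_w, experts)
```

With `k=2` of 4 experts active, each token pays for roughly half the expert compute while the model retains all four experts' capacity.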

Mistral Small 4 119B offers an excellent balance of quality and efficiency for production deployments, providing near-frontier quality at a fraction of the cost of larger models.

BENCHMARKS

Artificial Analysis Intelligence Index: 28

Providers for Mistral Small 4 119B (2603)

2 routes · sorted by uptime

ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.
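The routing-with-fallback behavior described above can be sketched as follows. The provider records, API shape, and ordering heuristic here are assumptions for illustration; ClosedRouter's actual routing logic is not public.

```python
def route_with_fallback(providers, request):
    """Try providers in descending-uptime order; fall back on failure."""
    last_err = None
    for p in sorted(providers, key=lambda p: -p["uptime"]):
        if request["tokens"] > p["context"]:
            continue                       # provider can't fit this prompt
        try:
            return p["call"](request)
        except RuntimeError as e:
            last_err = e                   # record the failure, try the next one
    raise RuntimeError(f"all providers failed: {last_err}")

def flaky(request):
    raise RuntimeError("provider down")    # simulated outage

# Hypothetical providers "A" and "B" for demonstration.
providers = [
    {"uptime": 0.99, "context": 128_000,
     "call": lambda r: {"ok": True, "via": "A"}},
    {"uptime": 0.95, "context": 128_000, "call": flaky},
]
result = route_with_fallback(providers, {"tokens": 1_000})
```

The filter on prompt size mirrors the "providers best able to handle your prompt size" claim, and the loop over the sorted list mirrors the uptime-maximizing fallback.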

Provider   Context   Quant   Uptime · 30d
—          —         bf16    0.00%
—          —         bf16    0.00%