Mistral Small 4 119B (2603)
Mistral Small 4 119B (March 2026) is an open-weight Mixture-of-Experts model from Mistral AI, featuring 119 billion total parameters with efficient expert routing. As part of the Mistral Small family, it delivers strong general-purpose capabilities while being optimized for cost-efficient inference.
The model uses a MoE architecture where only a fraction of parameters are active per token, enabling high-quality outputs at lower computational cost than dense models of equivalent total size. It supports multilingual text generation, coding, reasoning, and instruction following with competitive benchmark performance.
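The sparse activation described above comes from a router that scores every expert per token and dispatches each token to only its top-k experts. The following is a minimal sketch of that mechanism; the dimensions, expert count, and single-matrix "experts" are illustrative placeholders, not Mistral's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the real model's hyperparameters).
d_model, n_experts, top_k = 64, 8, 2

# Router: a linear layer producing one score per expert for each token.
W_router = rng.standard_normal((d_model, n_experts))
# Each "expert" is reduced to a single linear map for brevity.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ W_router                          # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over the selected experts' scores only.
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((4, d_model))
y = moe_layer(tokens)
print(y.shape)  # (4, 64)
```

Because only top_k of n_experts run per token, the per-token compute scales with the active parameters rather than the full 119B total, which is the efficiency the paragraph above refers to.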
Mistral Small 4 119B offers an excellent balance of quality and efficiency for production deployments, providing near-frontier quality at a fraction of the cost of larger models.
Providers for Mistral Small 4 119B (2603)
2 routes · sorted by uptime
ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.
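The fallback behavior described above can be sketched as trying providers in uptime order and falling through on failure. This is a hypothetical illustration of the routing pattern, not ClosedRouter's actual implementation; the provider names and callables are invented for the example.

```python
def route_with_fallback(providers, request):
    """Try each (name, call) pair in order; return the first success."""
    last_err = None
    for name, call in providers:
        try:
            return name, call(request)
        except Exception as err:  # provider down or rejecting the request
            last_err = err
    raise RuntimeError("all providers failed") from last_err

# Stub providers standing in for real inference endpoints.
def flaky_provider(req):
    raise TimeoutError("provider unavailable")

def healthy_provider(req):
    return {"output": req.upper()}

name, resp = route_with_fallback(
    [("provider-a", flaky_provider), ("provider-b", healthy_provider)],
    "hi",
)
print(name, resp["output"])  # provider-b HI
```

In practice a router would also weigh prompt size and sampling parameters when ordering candidates, as the description above notes; this sketch shows only the fallback chain itself.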