gpt-oss-120b
flagship · currently routing · 4.2k rpm
Context: 131K tokens
License: open
ABOUT
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for reasoning-intensive, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
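The configurable reasoning depth mentioned above is typically set per request. A minimal sketch of assembling such a request for an OpenAI-compatible chat-completions endpoint follows; the `reasoning` field name and its effort levels are assumptions for illustration, not confirmed by this page:

```python
import json

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a chat-completions payload with a per-call reasoning depth."""
    return {
        "model": "gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
        # Assumed field: reasoning effort, e.g. "low" / "medium" / "high".
        "reasoning": {"effort": effort},
    }

payload = build_request("Summarize MoE routing in one sentence.", effort="high")
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to the provider's chat-completions route with an API key; higher effort trades latency for deeper chain-of-thought.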
BENCHMARKS · Artificial Analysis Index
Intelligence 33.3
Coding 28.6
Agentic 37.9
Providers for gpt-oss-120b
9 routes · sorted by uptime
ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.
Provider
Context
Quant
Uptime · 30d
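The fallback behavior described above can be sketched as trying providers in descending 30-day uptime and moving to the next on failure. The provider records and `send()` callable here are hypothetical; real routing also weighs prompt size and sampling parameters:

```python
def route_request(providers, send):
    """Try providers in descending 30-day uptime; fall back on failure."""
    last_error = None
    for provider in sorted(providers, key=lambda p: p["uptime_30d"], reverse=True):
        try:
            return send(provider)
        except RuntimeError as err:  # a failed provider triggers fallback
            last_error = err
    raise RuntimeError("all providers failed") from last_error

# Hypothetical provider list with 30-day uptime percentages.
providers = [
    {"name": "a", "uptime_30d": 99.2},
    {"name": "b", "uptime_30d": 99.9},
]

def send(provider):
    if provider["name"] == "b":   # simulate the top provider being down
        raise RuntimeError("timeout")
    return f"served by {provider['name']}"

print(route_request(providers, send))  # falls back from "b" to "a"
```

Sorting by uptime keeps the most reliable route first while the loop guarantees a response as long as any provider is healthy.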