
Nemotron 3 Content Safety

flagship
NVIDIA · released 2024-06-01 · text
currently routing · 4.2k rpm
Context: 4.1M tokens
Input: — / 1M
Output: — / 1M
Speed: — t/s
License: open
/ ABOUT

NVIDIA Nemotron 3 Content Safety is a classification model for detecting unsafe, harmful, or policy-violating content in text. Part of the Nemotron 3 family, it categorizes text across multiple safety dimensions including violence, hate speech, sexual content, self-harm, and harassment.

The model provides granular category-level scores rather than a single binary safe/unsafe verdict, enabling nuanced, per-category moderation policies. It was trained on diverse safety datasets and handles multiple languages, making it suitable for global content moderation.

Nemotron 3 Content Safety is designed for integration into AI platforms, social media applications, and content pipelines that need automated safety screening.
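As a rough illustration of how category-level scores translate into moderation decisions, the sketch below applies per-dimension thresholds to a hypothetical response. The category names, score format, and threshold values are assumptions chosen to match the dimensions listed above; this page does not document the model's actual output schema.

```python
# A minimal sketch of consuming per-category safety scores.
# ASSUMPTION: the response shape, category names, and thresholds below
# are illustrative; the model card does not specify the output schema.

from typing import Dict

# Hypothetical per-category scores, one per safety dimension
# named in the description above.
scores: Dict[str, float] = {
    "violence": 0.02,
    "hate_speech": 0.71,
    "sexual_content": 0.01,
    "self_harm": 0.00,
    "harassment": 0.34,
}

# Per-category thresholds let a platform enforce a different policy
# for each dimension instead of one global safe/unsafe cutoff.
THRESHOLDS: Dict[str, float] = {
    "violence": 0.80,
    "hate_speech": 0.50,
    "sexual_content": 0.60,
    "self_harm": 0.30,
    "harassment": 0.70,
}

# Flag only the categories whose score meets that category's threshold.
flagged = {c: s for c, s in scores.items() if s >= THRESHOLDS[c]}
if flagged:
    print("blocked:", ", ".join(f"{c}={s:.2f}" for c, s in flagged.items()))
else:
    print("allowed")
```

With the sample values above, only `hate_speech` (0.71 ≥ 0.50) trips its threshold, so the text is blocked for that category alone; tightening or loosening one threshold changes policy for that dimension without affecting the others.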

Providers for Nemotron 3 Content Safety

1 route · sorted by uptime

ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.

Provider    Context    Quant    Uptime · 30d
—           —          bf16     0.00%
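For completeness, a hedged sketch of what screening text through the router might look like follows. The base URL, endpoint path, payload shape, and model slug are all assumptions made for illustration; this page does not document the request API, so consult ClosedRouter's API reference for the real schema.

```python
# A minimal sketch of sending text through the router for screening.
# ASSUMPTION: the base URL, endpoint path, payload fields, and model
# slug below are hypothetical; only the use of `requests` is real.

import requests

BASE_URL = "https://api.closedrouter.example/v1"  # hypothetical endpoint
API_KEY = "sk-..."  # placeholder; use your own key

resp = requests.post(
    f"{BASE_URL}/classifications",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        # Guessed routing identifier in the provider/model-name style.
        "model": "nvidia/nemotron-3-content-safety",
        "input": "Text to screen before it reaches users.",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

Because the router handles provider selection and fallback server-side, a client like this names only the model; which provider actually serves the request depends on prompt size, parameters, and current uptime.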