
Mistral Moderation

flagship · Mistral · released 2025-01-01 · text
currently routing · 4.2k rpm

Context:  8.2M tokens
Input:    — / 1M tokens
Output:   — / 1M tokens
Speed:    — t/s
License:  open
About

Mistral Moderation is a content moderation model from Mistral AI designed to classify text content for safety and policy compliance. It detects categories including hate speech, violence, sexual content, self-harm, harassment, and other potentially harmful content, providing structured moderation decisions.

The model supports multiple languages and provides granular category-level scores, enabling applications to implement nuanced moderation policies rather than simple binary safe/unsafe decisions. It can be used for both input screening and output filtering in AI systems.
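A nuanced, category-level policy of this kind might be sketched as below. The category names, score format, and thresholds are illustrative assumptions, not the model's actual schema:

```python
# Sketch of a per-category moderation policy built on granular scores,
# rather than a single binary safe/unsafe decision.
# Category names and the [0, 1] score format are assumptions.

# Per-category thresholds: stricter for some categories, looser for others.
THRESHOLDS = {
    "hate": 0.50,
    "violence": 0.60,
    "sexual": 0.40,
    "self_harm": 0.30,
    "harassment": 0.55,
}

def moderate(scores: dict[str, float]) -> dict:
    """Return a structured decision from per-category scores in [0, 1]."""
    flagged = {
        cat: score
        for cat, score in scores.items()
        if score >= THRESHOLDS.get(cat, 0.5)  # default cutoff for unlisted categories
    }
    return {"allowed": not flagged, "flagged_categories": flagged}

decision = moderate({"hate": 0.02, "violence": 0.71, "sexual": 0.01})
print(decision)  # {'allowed': False, 'flagged_categories': {'violence': 0.71}}
```

Applications can tune each threshold independently, for example lowering the self-harm cutoff while tolerating higher scores elsewhere.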

Mistral Moderation is essential for production AI deployments that need to ensure content safety, regulatory compliance, and brand protection in user-facing applications.
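The dual role described above, screening inputs before generation and filtering outputs afterwards, can be sketched as a thin wrapper. Here `classify` and `generate` are hypothetical placeholders standing in for the moderation call and the generation model:

```python
# Sketch of input screening plus output filtering around a generation call.
# `classify` and `generate` are stand-ins, not real API calls.

def classify(text: str) -> dict[str, float]:
    # Placeholder: a real implementation would call the moderation endpoint
    # and return per-category scores.
    banned = {"attack": "violence"}
    return {cat: 1.0 for word, cat in banned.items() if word in text.lower()}

def generate(prompt: str) -> str:
    # Placeholder for the actual generation model.
    return f"Echo: {prompt}"

def safe_generate(prompt: str, threshold: float = 0.5) -> str:
    # Input screening: reject unsafe prompts before they reach the model.
    if any(score >= threshold for score in classify(prompt).values()):
        return "[prompt rejected by input screening]"
    output = generate(prompt)
    # Output filtering: suppress unsafe completions before returning them.
    if any(score >= threshold for score in classify(output).values()):
        return "[response withheld by output filter]"
    return output
```

Running both checks costs an extra moderation call per request, but it catches unsafe completions even when the prompt itself was benign.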

Providers for Mistral Moderation

1 route · sorted by uptime

ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.

Provider    Context    Quant    Uptime · 30d
—           —          bf16     0.00%