Llama 3.2 11B Vision

flagship
Meta · released 2024-09-18 · text+image->text
currently routing · 4.2k rpm
Context: 131K tokens
Input: $0.24 / 1M
Output: $0.24 / 1M
Speed: — t/s
License: open
ABOUT

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed for tasks that combine visual and textual data. It excels at image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pretrained on a large dataset of image-text pairs, it handles complex image-analysis tasks with high accuracy.

Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research.

See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md).

Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).
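Since the model accepts mixed text and image input, a request message with an inline base64-encoded image might be built as follows. This is an illustrative sketch modeled on the widely used OpenAI-style chat schema, not taken from ClosedRouter's own documentation; the field names are assumptions.

```python
import base64

def build_vision_message(image_bytes: bytes, prompt: str) -> dict:
    """Build a multimodal chat message pairing a text prompt with an image.

    Assumes an OpenAI-style content-part schema ("text" / "image_url"
    parts); the actual schema may differ per provider.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                # Inline data URL; providers may also accept a plain HTTPS URL.
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }

# Example: caption an image (placeholder bytes stand in for real PNG data).
msg = build_vision_message(b"\x89PNG...", "Describe this image.")
```

The message would then go into the `messages` array of a chat-completion request addressed to this model.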

BENCHMARKS (Artificial Analysis Index)

Intelligence: 8.7
Coding: 4.3
Agentic: 4.9

Providers for Llama 3.2 11B Vision

4 routes · sorted by uptime

ClosedRouter routes requests to the providers best able to handle your prompt size and parameters, with automatic fallbacks to maximize uptime.

| Provider | Context | Quant | Uptime (30d) |
| --- | --- | --- | --- |
| — | 131K | bf16 | 0.00% |
| — | 131K | bf16 | 0.00% |
| — | 131K | bf16 | 0.00% |
| — | 131K | bf16 | 0.00% |
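The routing behavior described above (filter providers by capacity, prefer the best uptime, fall back on failure) can be sketched roughly as below. This is an illustrative outline under assumed data shapes, not ClosedRouter's actual routing logic.

```python
def rank_providers(providers: list[dict], prompt_tokens: int) -> list[dict]:
    """Keep providers whose context window fits the prompt,
    ordered best-uptime-first. Field names are assumptions."""
    eligible = [p for p in providers if p["context"] >= prompt_tokens]
    return sorted(eligible, key=lambda p: p["uptime"], reverse=True)

def complete_with_fallback(providers: list[dict], prompt_tokens: int, call):
    """Try each eligible provider in rank order; on failure,
    fall back to the next one."""
    for provider in rank_providers(providers, prompt_tokens):
        try:
            return call(provider)
        except Exception:
            continue  # provider errored; try the next route
    raise RuntimeError("all providers failed")
```

A real router would also weigh price, latency, and the request's sampling parameters, but the fallback loop is the core of the uptime guarantee.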