llama-11b-vision-instruct-ov

llama-11b-vision-instruct-ov is an OpenVINO int4 quantized version of Llama 3.2 11B Vision Instruct, providing a small, fast inference implementation optimized for AI PCs with Intel GPU, CPU, and NPU.

The parent model, Llama 3.2 11B Vision Instruct, is an 11B-parameter multimodal instruct chat foundation model from Meta.
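
As a minimal sketch (using standard Hugging Face Hub tooling rather than any llmware-specific API), the quantized OpenVINO package can be pulled down for local use roughly as follows; the local_dir path is an illustrative choice:

```python
from huggingface_hub import snapshot_download

# Download the int4 OpenVINO package for local inference.
# The target directory name is illustrative, not prescribed by this model card.
model_dir = snapshot_download(
    repo_id="llmware/llama-11b-vision-instruct-ov",
    local_dir="llama-11b-vision-instruct-ov",
)
print(f"model files downloaded to: {model_dir}")
```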

Model Description

  • Developed by: meta-llama
  • Quantized by: llmware
  • Model type: llama-3.2
  • Parameters: 11 billion
  • Model Parent: meta-llama/Llama-3.2-11B-Vision-Instruct
  • Language(s) (NLP): English
  • License: Llama 3.2 Community License
  • Uses: Multimodal (image+text -> text)
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

For a reference open source implementation, please see this Intel OpenVINO notebook.
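
As a rough sketch only (assuming the package is compatible with the openvino_genai VLMPipeline API, and using an illustrative model directory and image path), local image+text inference might look like this; the linked notebook remains the reference implementation:

```python
import numpy as np
import openvino as ov
import openvino_genai as ov_genai
from PIL import Image

# Assumed local folder containing the downloaded OpenVINO model files.
model_dir = "llama-11b-vision-instruct-ov"

# Load the image as a [1, H, W, 3] uint8 tensor, the layout expected by the VLM pipeline.
pic = Image.open("sample.jpg").convert("RGB")  # sample.jpg is an illustrative path
image = ov.Tensor(np.array(pic)[None].astype(np.uint8))

# Target "GPU", "CPU", or "NPU" depending on the Intel device available.
pipe = ov_genai.VLMPipeline(model_dir, "GPU")

config = ov_genai.GenerationConfig()
config.max_new_tokens = 128

# Stream generated text to stdout as it is produced.
pipe.generate(
    "Describe this image.",
    image=image,
    generation_config=config,
    streamer=lambda subword: print(subword, end="", flush=True),
)
```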

Model Card Contact

llmware on github

llmware on hf

llmware website
