---
license: llama3.2
inference: false
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
base_model_relation: quantized
tags:
- green
- p11
- llmware-vision
- ov
- emerald
---

# llama-11b-vision-instruct-ov

**llama-11b-vision-instruct-ov** is an OpenVINO int4 quantized version of Llama 3.2 11B Vision Instruct, providing a small, fast inference implementation optimized for AI PCs using Intel GPU, CPU and NPU.

[**llama-11b-vision-instruct**](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) is an 11B multimodal instruct chat foundation model from Meta.

### Model Description

- **Developed by:** meta-llama
- **Quantized by:** llmware
- **Model type:** llama-3.2
- **Parameters:** 11 billion
- **Model Parent:** meta-llama/Llama-3.2-11B-Vision-Instruct
- **Language(s) (NLP):** English
- **License:** Llama 3.2 Community License
- **Uses:** Multimodal (image + text -> text)
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4

For a reference open source implementation, please see this [Intel OpenVINO Notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/mllama-3.2)

## Model Card Contact

[llmware on github](https://www.github.com/llmware-ai/llmware)

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)
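
## Usage Sketch

A minimal, hedged sketch of how this model might be called. The prompt-building helper follows the Llama 3.2 Vision Instruct chat format (image placeholder plus text); the `run_demo` function is an assumption built on the `optimum-intel` OpenVINO integration (`OVModelForVisualCausalLM` and its arguments are assumed, and `sample.jpg` is a hypothetical local image) — consult the Intel OpenVINO notebook linked above for a verified end-to-end example.

```python
def build_messages(question: str):
    # Llama 3.2 Vision Instruct expects an image placeholder alongside the
    # user's text; the processor's chat template expands this structure.
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_demo():
    # Assumed API sketch -- requires transformers, optimum-intel with
    # OpenVINO extras, and Pillow installed; not a definitive implementation.
    from transformers import AutoProcessor
    from optimum.intel import OVModelForVisualCausalLM  # assumed class name
    from PIL import Image

    model_id = "llmware/llama-11b-vision-instruct-ov"
    processor = AutoProcessor.from_pretrained(model_id)
    model = OVModelForVisualCausalLM.from_pretrained(model_id, device="CPU")

    image = Image.open("sample.jpg")  # hypothetical local image
    prompt = processor.apply_chat_template(
        build_messages("Describe this image."),
        add_generation_prompt=True,
    )
    inputs = processor(image, prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    print(processor.decode(out[0], skip_special_tokens=True))
```

Device can be switched to `"GPU"` or `"NPU"` where an Intel accelerator is available, which is the intended deployment target for this quantized build.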