---
license: apache-2.0
inference: false
base_model: Qwen/Qwen2-VL-7B-Instruct
base_model_relation: quantized
tags: [green, llmware-vision, p7, ov, emerald]
---

# qwen2-vl-7b-instruct-ov

**qwen2-vl-7b-instruct-ov** is an OpenVINO int4 quantized version of [Qwen2-VL-7B-Instruct](https://www.huggingface.co/Qwen/Qwen2-VL-7B-Instruct), providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.

This is a multi-modal vision-to-text model from the Qwen2 release series from Qwen (summer 2024). It is a high-quality, innovative model that accepts multi-modal inputs (image/video and text).

### Model Description

- **Developed by:** Qwen
- **Quantized by:** llmware
- **Model type:** qwen2-vl
- **Parameters:** 7 billion
- **Model Parent:** Qwen/Qwen2-VL-7B-Instruct
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Multi-Modal LLM
- **Quantization:** int4

For an open source inference implementation, please see this [Intel OpenVINO notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/qwen2-vl).

## Model Card Contact

[llmware on github](https://www.github.com/llmware-ai/llmware)

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)
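
## Example Usage

Below is a minimal usage sketch, not an official llmware example: it assumes this repository's id is `llmware/qwen2-vl-7b-instruct-ov`, that `optimum-intel` with OpenVINO support, `transformers`, and `pillow` are installed, and that `sample.png` and the `"CPU"` device string are placeholders to replace with your own image and target device. For a maintained reference implementation, follow the Intel OpenVINO notebook linked above.

```python
# Minimal sketch (assumptions noted above): load the OpenVINO int4 model with
# optimum-intel and run a single image + text prompt through it.
# Assumes: pip install "optimum[openvino]" transformers pillow
from optimum.intel.openvino import OVModelForVisualCausalLM
from transformers import AutoProcessor
from PIL import Image

model_id = "llmware/qwen2-vl-7b-instruct-ov"   # assumed repo id for this card

# device can be "CPU", "GPU", or "NPU" depending on the AI PC hardware
model = OVModelForVisualCausalLM.from_pretrained(model_id, device="CPU")
processor = AutoProcessor.from_pretrained(model_id)

# build a chat-style prompt containing one image placeholder and one question
messages = [
    {"role": "user",
     "content": [
         {"type": "image"},
         {"type": "text", "text": "Describe this image in one sentence."},
     ]},
]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

image = Image.open("sample.png")               # placeholder image path
inputs = processor(text=[prompt], images=[image], return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=128)

# strip the prompt tokens and decode only the newly generated answer
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```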