Inference:

```
./llama-qwen2vl-cli -m Q8_0.gguf --mmproj qwen2vl-vision.gguf -p "Describe this image." --image "demo.jpg"
```
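The invocation above can be wrapped in a small script so the model, projector, image, and prompt are configurable. This is a minimal sketch: the file names are placeholders standing in for your local paths, and only the flags shown in the command above (`-m`, `--mmproj`, `-p`, `--image`) are used.

```shell
#!/bin/sh
# Placeholder paths -- replace with your local GGUF files and image.
MODEL="Q8_0.gguf"
MMPROJ="qwen2vl-vision.gguf"
IMAGE="demo.jpg"
PROMPT="Describe this image."

# Assemble the llama-qwen2vl-cli command; echo it for inspection
# before running (drop the echo to execute directly).
CMD="./llama-qwen2vl-cli -m $MODEL --mmproj $MMPROJ -p \"$PROMPT\" --image \"$IMAGE\""
echo "$CMD"
```

Printing the command first makes it easy to verify the quoted prompt and file paths before committing to a (potentially slow) inference run.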
Converted using this Colab Notebook:
Special thanks to HimariO for the excellent work on enabling quantization for Qwen2-VL! (PR on GitHub)