This model was quantized with https://github.com/WanBenLe/AutoAWQ-with-llava-v1.6.git

The source model is llava-hf/llava-v1.6-mistral-7b-hf
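Below is a minimal usage sketch for loading an AWQ checkpoint of llava-v1.6-mistral-7b with the `transformers` LLaVA-NeXT classes. It assumes the quantized weights are published in a Hugging Face repo (the `MODEL_ID` placeholder below is hypothetical, not this card's actual id) and that `autoawq` is installed so `transformers` can dequantize the AWQ weights at load time.

```python
# Hypothetical sketch: load an AWQ-quantized llava-v1.6-mistral-7b checkpoint
# and run a single image + text prompt. Requires: pip install transformers autoawq pillow
import torch
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

MODEL_ID = "your-username/llava-v1.6-mistral-7b-awq"  # placeholder repo id

# The processor handles both image preprocessing and tokenization.
processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
model = LlavaNextForConditionalGeneration.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Mistral-style chat template used by llava-v1.6-mistral-7b.
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```

This mirrors the usage documented for the source model llava-hf/llava-v1.6-mistral-7b-hf; only the repo id and the AWQ backend differ.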