These are GGUF quants for an experimental model.
Available quants: Q4_K_M, Q4_K_S, IQ4_XS, Q5_K_M, Q5_K_S,
Q6_K, Q8_0, IQ3_M, IQ3_S, IQ3_XXS
Original model weights:
https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-7B
Vision/multimodal capabilities:
If you want to use vision functionality:
- Make sure you are using the latest version of KoboldCpp.
- To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file. It is hosted in this repository inside the mmproj folder.
- You can load the mmproj through the corresponding section in the KoboldCpp interface.
- For CLI users, you can load the mmproj file by adding the `--mmproj` flag to your usual command, e.g. `--mmproj your-mmproj-file.gguf` (see the example below).
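For reference, a full launch command might look like the sketch below; the model and mmproj file names are placeholders, not the exact files in this repository:

```bash
# Minimal sketch of a KoboldCpp launch with the vision projector loaded.
# Replace the file names with the quant and mmproj files you downloaded.
python koboldcpp.py \
  --model Eris_PrimeV4-Vision-7B-Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf
```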
Quantization information:
Steps performed:
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
Quantized with the latest llama.cpp available at the time.
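A hedged sketch of that pipeline using llama.cpp's own tools is shown below. Exact script and binary names depend on the llama.cpp version (newer builds ship `llama-imatrix` and `llama-quantize`), and the calibration text file is a placeholder:

```bash
# 1. Base -> GGUF(F16): convert the original HF weights to an F16 GGUF.
python convert_hf_to_gguf.py ./Eris_PrimeV4-Vision-7B \
  --outtype f16 --outfile Eris_PrimeV4-Vision-7B-F16.gguf

# 2. Imatrix-Data(F16): compute an importance matrix from calibration text.
./llama-imatrix -m Eris_PrimeV4-Vision-7B-F16.gguf \
  -f calibration.txt -o imatrix.dat

# 3. GGUF(Imatrix-Quants): produce each listed quant, e.g. Q4_K_M.
./llama-quantize --imatrix imatrix.dat \
  Eris_PrimeV4-Vision-7B-F16.gguf Eris_PrimeV4-Vision-7B-Q4_K_M.gguf Q4_K_M
```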