Qwen2.5 VL
is it possible to have a GGUF of one of these?
Qwen/Qwen2.5-VL-7B-Instruct
or
nomic-ai/nomic-embed-vision-v1.5
or
deepseek-ai/Janus-Pro-7B
and if not, do you have an idea how to use them?
The first two are queued; the last is not supported by llama.cpp, so no GGUF is possible. You can check on the progress of the first two at http://hf.tst.eu/status.html
Unfortunately, Qwen2.5-VL-7B-Instruct is not supported by llama.cpp either (its architecture is Qwen2_5_VLForConditionalGeneration)
And nomic is similarly not supported by llama.cpp (NomicVisionModel)
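You can check that yourself: llama.cpp's convert_hf_to_gguf.py looks up the architectures field from the repo's config.json in its registry of supported model classes. A minimal sketch (assuming the huggingface_hub package is installed) that prints what each repo declares:

```python
# Minimal sketch: print the architecture a HF repo declares in its
# config.json - this is the name llama.cpp's convert_hf_to_gguf.py
# matches against its registry of supported model classes.
import json
from huggingface_hub import hf_hub_download

for repo in ("Qwen/Qwen2.5-VL-7B-Instruct",
             "nomic-ai/nomic-embed-vision-v1.5",
             "deepseek-ai/Janus-Pro-7B"):
    path = hf_hub_download(repo_id=repo, filename="config.json")
    with open(path) as f:
        print(repo, "->", json.load(f).get("architectures"))
```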
Wow, that's a complete failure on llama.cpp's side.
but how can nomic make a GGUF (is it their secret? ^^)
https://huggingface.co/nomic-ai/nomic-embed-text-v1-GGUF
and
why is there one for llava -> it is also a kind of vision model (MLLM)
https://huggingface.co/liuhaotian/llava-v1.5-7b
the GGUF
https://huggingface.co/second-state/Llava-v1.5-7B-GGUF
is that so different?
one of that kind you have made yourself:
https://huggingface.co/mradermacher/llava-v1.5-7b-hf-vicuna-GGUF
but how can nomic make a GGUF (is it their secret? ^^)
I don't know. Maybe it was supported in a special version of llama.cpp. Or, what I suspect, they didn't really make GGUFs of that model, but of something else (such as a modified model that is supported).
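You can verify that yourself: every GGUF records the architecture it was converted as under the general.architecture metadata key. Here is a stdlib-only sketch that reads it straight from the file header (assuming a GGUF v2/v3 file, little-endian, as llama.cpp writes them):

```python
# Sketch: read general.architecture from a GGUF header (GGUF v2/v3,
# little-endian, stdlib only). A robust reader would also skip array
# values (type 9); in files written by llama.cpp's converter the
# general.architecture key comes first, so we just stop at an array.
import struct
import sys

# byte sizes of the fixed-width metadata value types (type id -> size)
FIXED_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4,
               6: 4, 7: 1, 10: 8, 11: 8, 12: 8}

def read_string(f):
    (length,) = struct.unpack("<Q", f.read(8))
    return f.read(length).decode("utf-8")

def gguf_architecture(path):
    with open(path, "rb") as f:
        assert f.read(4) == b"GGUF", "not a GGUF file"
        (version,) = struct.unpack("<I", f.read(4))
        assert version >= 2, f"unsupported GGUF version {version}"
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
        for _ in range(n_kv):
            key = read_string(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            if vtype == 8:  # string value
                value = read_string(f)
                if key == "general.architecture":
                    return value
            elif vtype in FIXED_SIZES:
                f.seek(FIXED_SIZES[vtype], 1)
            else:  # array - would need recursive skipping
                break
    return None

print(gguf_architecture(sys.argv[1]))
```

Running that on their GGUF would show which architecture it was actually converted as.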
why is there one for llava -> it is also a kind of vision model (MLLM)
Different models differ, indeed. llama.cpp only supports a relatively small subset of model architectures, and support for each one has to be implemented explicitly; llava happens to be one of the supported ones, while the architectures you asked about are not.
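If you want to see the current list, the supported set is exactly what convert_hf_to_gguf.py registers. A sketch (assuming the current ggml-org/llama.cpp repo and file layout; the registering decorator has been renamed over time, so the regex just matches register(...) calls):

```python
# Sketch: list the HF architecture names registered in llama.cpp's
# converter. Assumes the current repo/file layout (ggml-org/llama.cpp,
# convert_hf_to_gguf.py); the decorator has changed name across
# versions (Model.register / ModelBase.register), hence the loose regex.
import re
import urllib.request

URL = ("https://raw.githubusercontent.com/ggml-org/llama.cpp/"
       "master/convert_hf_to_gguf.py")
source = urllib.request.urlopen(URL).read().decode("utf-8")
# a register(...) call may list several architecture names
calls = re.findall(r'\.register\(([^)]*)\)', source)
archs = sorted({name for call in calls
                for name in re.findall(r'"([^"]+)"', call)})
print(len(archs), "registered architectures")
print("\n".join(archs))
```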