This model was converted from zai-org/GLM-4.5V to GGUF using convert_hf_to_gguf.py.
To use it with llama.cpp:
llama-server -hf ggml-org/GLM-4.5V-GGUF
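Once llama-server is running, it exposes llama.cpp's OpenAI-compatible HTTP API. Below is a minimal sketch of a chat request, assuming the server's default port (8080); the prompt text is just an illustration:

```python
import json
from urllib import request

# llama-server listens on port 8080 by default and serves an
# OpenAI-compatible chat endpoint at /v1/chat/completions.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "messages": [
        {"role": "user", "content": "Hello"},  # example prompt (assumption)
    ],
}
req = request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```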