Disclaimer: I don't know what I'm doing.

Original Model: https://huggingface.co/Qwen/QwQ-32B

QwQ-32B EXL2 sizes:

| Quant | Size |
|---------|---------|
| 8.0 bpw | 33.5 GB |
| 7.0 bpw | 29.6 GB |
| 6.5 bpw | 27.5 GB |
| 6.0 bpw | 25.6 GB |
| 5.5 bpw | 23.6 GB |
| 5.0 bpw | 21.7 GB |
| 4.5 bpw | 19.7 GB |
| 4.0 bpw | 17.8 GB |
| 3.75 bpw | 16.8 GB |
| 3.5 bpw | WIP |
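As a rough sanity check on the table above, an EXL2 file's size is approximately parameter count × bits-per-weight / 8, plus some overhead for layers kept at higher precision (embeddings, norms). The sketch below assumes a parameter count of about 32.5B for QwQ-32B; the exact figure and per-quant overhead will vary.

```python
# Rough size estimate for an EXL2 quant: params * bpw / 8 bytes.
# PARAMS is an assumption (~32.5B for QwQ-32B); actual files are
# somewhat larger because some tensors stay at higher precision.

PARAMS = 32.5e9  # assumed parameter count

def estimated_size_gb(bpw: float, params: float = PARAMS) -> float:
    """Estimate quantized model size in GB for a given bits-per-weight."""
    return params * bpw / 8 / 1e9

for bpw in (8.0, 5.5, 4.0):
    print(f"{bpw} bpw ≈ {estimated_size_gb(bpw):.1f} GB")
```

For 5.5 bpw this gives roughly 22 GB against the listed 23.6 GB, with the gap accounted for by the higher-precision tensors.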
Model tree for cshared/Qwen-QwQ-32B-5.5bpw-exl2: Qwen/Qwen2.5-32B (base) → Qwen/QwQ-32B (finetune) → this model (EXL2 quantization).