Quantized Model config file
#23
by endeavorXx - opened
```
  File "/miniconda3/envs/pix2/lib/python3.11/site-packages/vllm/config.py", line 3114, in __post_init__
    self.quant_config = VllmConfig._get_quantization_config(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/pix2/lib/python3.11/site-packages/vllm/config.py", line 3058, in _get_quantization_config
    quant_config = get_quant_config(model_config, load_config)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/pix2/lib/python3.11/site-packages/vllm/model_executor/model_loader/weight_utils.py", line 191, in get_quant_config
    raise ValueError(
ValueError: Cannot find the config file for gptq
```
How can I fix this? I also checked, and it does not work with AWQ quantization either.
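For what it's worth, this error is raised when vLLM cannot find the quantization config file in the model directory (for GPTQ checkpoints this is typically `quantize_config.json`). If the checkpoint you downloaded is missing that file, one possible workaround is to write it yourself. This is only a sketch: the directory path is a placeholder, and the field values (`bits`, `group_size`, `desc_act`) are assumptions based on a typical 4-bit GPTQ export, so they must match how your model was actually quantized:

```python
import json
import os

# Placeholder: point this at your local model checkpoint directory.
model_dir = "./my-gptq-model"

# Assumed settings for a typical 4-bit GPTQ export; these must match
# the parameters the model was actually quantized with.
quant_config = {
    "bits": 4,
    "group_size": 128,
    "desc_act": False,
}

os.makedirs(model_dir, exist_ok=True)
config_path = os.path.join(model_dir, "quantize_config.json")
with open(config_path, "w") as f:
    json.dump(quant_config, f, indent=2)

print(config_path)
```

If the quantization parameters are unknown, check the model card or the tool that produced the checkpoint rather than guessing, since mismatched values will load but produce garbage outputs.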