Minzhi Huang
ChloeHuang1
AI & ML interests
None yet
Recent Activity
New activity 8 days ago on Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ: "Can this model be used with vLLM?"
Organizations
None yet
ChloeHuang1's activity
vLLM error: Blockwise quantization only supports 16/32-bit floats, but got torch.uint8
6 replies · #3 opened 10 days ago by ChloeHuang1
Can this model be used with vLLM?
3 replies · #2 opened 8 days ago by ChloeHuang1