Qwen/Qwen2-VL-72B-Instruct-AWQ
Tags: Image-Text-to-Text · Safetensors · English · qwen2_vl · multimodal · conversational · 4-bit precision · awq
arXiv: 2409.12191, 2308.12966
License: tongyi-qianwen
Branch: main · 4 contributors · History: 8 commits
Latest commit: yangapku — fix(ckpt) fix corrupted ckpt file (712d5a5), about 1 month ago
| File | Size | LFS | Last commit | Updated |
|---|---|---|---|---|
| .gitattributes | 1.52 kB | | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| LICENSE | 6.96 kB | | Create LICENSE | about 2 months ago |
| README.md | 18.9 kB | | Update README.md | about 2 months ago |
| added_tokens.json | 392 Bytes | | Upload folder using huggingface_hub | about 2 months ago |
| chat_template.json | 1.05 kB | | Upload folder using huggingface_hub | about 2 months ago |
| config.json | 1.33 kB | | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| generation_config.json | 227 Bytes | | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| merges.txt | 1.67 MB | | Upload folder using huggingface_hub | about 2 months ago |
| model-00001-of-00011.safetensors | 3.97 GB | LFS | fix(ckpt) fix corrupted ckpt file | about 1 month ago |
| model-00002-of-00011.safetensors | 3.91 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00003-of-00011.safetensors | 3.99 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00004-of-00011.safetensors | 3.99 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00005-of-00011.safetensors | 3.91 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00006-of-00011.safetensors | 3.99 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00007-of-00011.safetensors | 3.99 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00008-of-00011.safetensors | 3.91 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00009-of-00011.safetensors | 3.99 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00010-of-00011.safetensors | 3.99 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model-00011-of-00011.safetensors | 3.33 GB | LFS | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| model.safetensors.index.json | 209 kB | | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | about 1 month ago |
| preprocessor_config.json | 594 Bytes | | Upload folder using huggingface_hub | about 2 months ago |
| special_tokens_map.json | 613 Bytes | | Upload folder using huggingface_hub | about 2 months ago |
| tokenizer.json | 7.03 MB | | Upload folder using huggingface_hub | about 2 months ago |
| tokenizer_config.json | 4.3 kB | | Upload folder using huggingface_hub | about 2 months ago |
| vocab.json | 2.78 MB | | Upload folder using huggingface_hub | about 2 months ago |
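The commit message repeated across the weight shards says `intermediate_size` was padded to 29696 so the AWQ-quantized model can run with 8-way tensor parallelism in vLLM. A minimal sketch of the divisibility constraint behind that number — the unpadded size of 29568 and the AWQ group size of 128 are assumptions not stated in this listing, not facts from the repo:

```python
AWQ_GROUP_SIZE = 128  # typical AWQ quantization group size (assumption)
TP = 8                # tensor-parallel degree named in the commit messages

def shard_ok(intermediate_size: int) -> bool:
    """Check that the MLP weight splits evenly across TP ranks and that
    each rank's slice holds a whole number of AWQ quantization groups."""
    per_shard, rem = divmod(intermediate_size, TP)
    return rem == 0 and per_shard % AWQ_GROUP_SIZE == 0

# Unpadded size (assumed 29568): 29568 / 8 = 3696, not a multiple of 128.
print(shard_ok(29568))  # False
# Padded size from the commits: 29696 / 8 = 3712 = 29 * 128.
print(shard_ok(29696))  # True
```

Padding with zeros changes no activations, so this is purely a layout fix that lets each of the 8 vLLM ranks dequantize whole AWQ groups.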