mlx-community / QwQ-32B-Coder-Fusion-9010-4bit
Tags: Text Generation · Transformers · Safetensors · MLX · English · qwen2 · chat · abliterated · uncensored · mlx-my-repo · conversational · text-generation-inference · Inference Endpoints · 4-bit precision
License: apache-2.0
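
This 4-bit MLX conversion is typically run with the mlx-lm package on Apple silicon. A minimal sketch is shown below; the prompt, token limit, and use of the chat template are illustrative assumptions, not taken from this page.

```python
# Minimal sketch: loading and generating with mlx-lm (assumes `pip install mlx-lm`
# on Apple silicon; the prompt text and max_tokens value below are illustrative).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/QwQ-32B-Coder-Fusion-9010-4bit")

prompt = "Write a Python function that checks whether a string is a palindrome."

# The model is tagged as conversational, so wrap the prompt in the chat template.
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(response)
```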
QwQ-32B-Coder-Fusion-9010-4bit
1 contributor · History: 9 commits
Latest commit: Felladrin, "Upload added_tokens.json with huggingface_hub" (062f5f6, verified), 17 days ago
All files are flagged Safe by the repository scan.

| File | Size | LFS | Last commit | Updated |
|------|------|-----|-------------|---------|
| .gitattributes | 1.52 kB | | initial commit | 17 days ago |
| added_tokens.json | 605 Bytes | | Upload added_tokens.json with huggingface_hub | 17 days ago |
| model-00001-of-00004.safetensors | 5.37 GB | LFS | Upload model-00001-of-00004.safetensors with huggingface_hub | 17 days ago |
| model-00002-of-00004.safetensors | 5.34 GB | LFS | Upload model-00002-of-00004.safetensors with huggingface_hub | 17 days ago |
| model-00003-of-00004.safetensors | 5.37 GB | LFS | Upload model-00003-of-00004.safetensors with huggingface_hub | 17 days ago |
| model-00004-of-00004.safetensors | 2.36 GB | LFS | Upload model-00004-of-00004.safetensors with huggingface_hub | 17 days ago |
| model.safetensors.index.json | 143 kB | | Upload model.safetensors.index.json with huggingface_hub | 17 days ago |
| special_tokens_map.json | 613 Bytes | | Upload special_tokens_map.json with huggingface_hub | 17 days ago |
| tokenizer_config.json | 7.3 kB | | Upload tokenizer_config.json with huggingface_hub | 17 days ago |
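
Since the sharded safetensors and tokenizer files above were uploaded with huggingface_hub, they can also be fetched ahead of time with the same library. A minimal sketch, assuming the repository id shown on this page; the local_dir target is an illustrative assumption.

```python
# Minimal sketch: pre-downloading the repository files listed above with
# huggingface_hub (the local_dir path is an illustrative assumption).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="mlx-community/QwQ-32B-Coder-Fusion-9010-4bit",
    local_dir="QwQ-32B-Coder-Fusion-9010-4bit",  # assumed download target
)
print(local_path)  # directory containing the safetensors shards and tokenizer files
```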