Azure99/blossom-v5-4b-gguf
Tags: GGUF, conversational
Likes: 0
License: apache-2.0
Files and versions
Repository: blossom-v5-4b-gguf, 47.3 GB total, 1 contributor, 14 commits
Latest commit: 9e20bd2 (verified), Azure99, "Upload model-q6_k.gguf with huggingface_hub", over 1 year ago
File                    Size        Last commit                                          Updated
.gitattributes          2.24 kB     Upload model-q6_k.gguf with huggingface_hub          over 1 year ago
README.md               28 Bytes    initial commit                                       over 1 year ago
ggml-model-f16.gguf     7.91 GB     Upload ggml-model-f16.gguf with huggingface_hub      over 1 year ago
ggml-model-q4_k_m.gguf  2.46 GB     Upload ggml-model-q4_k_m.gguf with huggingface_hub   over 1 year ago
ggml-model-q4_k_s.gguf  2.34 GB     Upload ggml-model-q4_k_s.gguf with huggingface_hub   over 1 year ago
ggml-model-q5_k_m.gguf  2.84 GB     Upload ggml-model-q5_k_m.gguf with huggingface_hub   over 1 year ago
ggml-model-q5_k_s.gguf  2.78 GB     Upload ggml-model-q5_k_s.gguf with huggingface_hub   over 1 year ago
ggml-model-q6_k.gguf    3.25 GB     Upload ggml-model-q6_k.gguf with huggingface_hub     over 1 year ago
ggml-model-q8_0.gguf    4.2 GB      Upload ggml-model-q8_0.gguf with huggingface_hub     over 1 year ago
model-f16.gguf          7.91 GB     Upload model-f16.gguf with huggingface_hub           over 1 year ago
model-q4_k_m.gguf       2.46 GB     Upload model-q4_k_m.gguf with huggingface_hub        over 1 year ago
model-q4_k_s.gguf       2.34 GB     Upload model-q4_k_s.gguf with huggingface_hub        over 1 year ago
model-q5_k_m.gguf       2.84 GB     Upload model-q5_k_m.gguf with huggingface_hub        over 1 year ago
model-q5_k_s.gguf       2.78 GB     Upload model-q5_k_s.gguf with huggingface_hub        over 1 year ago
model-q6_k.gguf         3.25 GB     Upload model-q6_k.gguf with huggingface_hub          over 1 year ago

Note: all .gguf files above are stored via the Xet backend.
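Since every file in the history was uploaded with huggingface_hub, the same library can fetch a single quantized variant. A minimal sketch, assuming only the repo id and a filename taken from the listing above; cache location and the runtime used afterwards are up to the caller:

    from huggingface_hub import hf_hub_download

    # Download one quantized GGUF file from this repository.
    # Repo id and filename come from the file listing above.
    path = hf_hub_download(
        repo_id="Azure99/blossom-v5-4b-gguf",
        filename="model-q4_k_m.gguf",
    )
    print(path)  # local path to the cached .gguf file

GGUF files are typically run with llama.cpp or a binding such as llama-cpp-python; which quantization level to download depends on available memory, with the sizes in the table above as a guide.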