hs-hf/madlad400-3b-mt-gguf
Tags: Translation · GGUF · Inference Endpoints
License: apache-2.0
Branch: main · 1 contributor · History: 3 commits
Latest commit: Create README.md (15a0654, verified) by hs-hf, about 2 months ago
| File | Size | LFS | Last commit message | Last updated |
|------|------|-----|---------------------|--------------|
| .gitattributes | 1.96 kB | | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| README.md | 90 Bytes | | Create README.md | about 2 months ago |
| madlad400-3b-mt-FP16.gguf | 5.89 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| madlad400-3b-mt-q4_k_m.gguf | 1.86 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| madlad400-3b-mt-q4_k_s.gguf | 1.73 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| madlad400-3b-mt-q5_k_m.gguf | 2.13 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| madlad400-3b-mt-q5_k_s.gguf | 2.06 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| madlad400-3b-mt-q6_k.gguf | 2.42 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
| madlad400-3b-mt-q8_0.gguf | 3.13 GB | LFS | Converted the google/madlad400-3b-mt model into quantized GGUF format | about 2 months ago |
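
The q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, and q8_0 suffixes are llama.cpp quantization types; the smaller files trade some output quality for lower memory use, while the FP16 file is the unquantized reference. As a minimal sketch (assuming the `huggingface_hub` Python package is installed; the repo id and file name are taken from the listing above), one of the quantized files can be fetched like this:

```python
# Minimal sketch: download one quantized GGUF file from this repo.
# Assumes: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# File name copied from the table above; q4_k_m is a common
# balance between file size and output quality.
model_path = hf_hub_download(
    repo_id="hs-hf/madlad400-3b-mt-gguf",
    filename="madlad400-3b-mt-q4_k_m.gguf",
)

print(model_path)  # local cache path of the downloaded .gguf file
```

MADLAD-400 is a T5-based encoder-decoder translation model, so the downloaded file needs a GGUF runtime with T5/encoder-decoder support (for example, recent llama.cpp builds); decoder-only GGUF loaders may not accept it.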