gguf/Yarn-Mistral-7b-128k-GGUF
1 contributor, 2 commits
Latest commit 3add52a (verified): Upload folder using huggingface_hub (about 1 year ago)
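The commit message above is the default one produced by huggingface_hub's folder-upload helper. A minimal sketch of how a repository like this is typically populated, assuming write access to the repo; the repo id is taken from the page breadcrumb and the local folder path is hypothetical:

```python
# Minimal sketch: push a local folder of quantized GGUF files to the Hub.
# Assumes you are logged in (huggingface-cli login) with write access.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    repo_id="gguf/Yarn-Mistral-7b-128k-GGUF",      # from the page breadcrumb
    folder_path="./yarn-mistral-7b-128k-gguf",      # hypothetical local folder
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```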
File                              Size     LFS  Commit message                       Last modified
.gitattributes                    1.93 kB       Upload folder using huggingface_hub  about 1 year ago
yarn-mistral-7b-128k.Q2_K.gguf    2.72 GB  LFS  Upload folder using huggingface_hub  about 1 year ago
yarn-mistral-7b-128k.Q3_K_M.gguf  3.52 GB  LFS  Upload folder using huggingface_hub  about 1 year ago
yarn-mistral-7b-128k.Q4_K_M.gguf  4.37 GB  LFS  Upload folder using huggingface_hub  about 1 year ago
yarn-mistral-7b-128k.Q5_K_M.gguf  5.13 GB  LFS  Upload folder using huggingface_hub  about 1 year ago
yarn-mistral-7b-128k.Q6_K.gguf    5.94 GB  LFS  Upload folder using huggingface_hub  about 1 year ago
yarn-mistral-7b-128k.Q8_0.gguf    7.7 GB   LFS  Upload folder using huggingface_hub  about 1 year ago
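Any of the quantized files above can be fetched directly with huggingface_hub. A minimal sketch, assuming the repo id from the breadcrumb and picking the Q4_K_M variant (4.37 GB), a common size/quality trade-off:

```python
# Minimal sketch: download one GGUF file from this repo into the local cache.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="gguf/Yarn-Mistral-7b-128k-GGUF",
    filename="yarn-mistral-7b-128k.Q4_K_M.gguf",
)
# model_path points at the cached file; load it with any GGUF-compatible
# runtime (for example llama.cpp or llama-cpp-python).
print(model_path)
```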