MaziyarPanahi/xLAM-8x22b-r-GGUF
Tags: Text Generation | GGUF | quantized | 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision | imatrix | conversational
1 contributor | History: 9 commits
Latest commit 89036a5 (verified, 4 months ago), by MaziyarPanahi: Upload xLAM-8x22b-r.Q5_K_S.gguf-00005-of-00005.gguf with huggingface_hub
File | Size | LFS | Last commit | Updated
.gitattributes | 3.15 kB | | Upload xLAM-8x22b-r.Q5_K_S.gguf-00005-of-00005.gguf with huggingface_hub | 4 months ago
README.md | 2.96 kB | | Update README.md (#4) | 4 months ago
xLAM-8x22b-r.IQ1_M.gguf | 32.7 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
xLAM-8x22b-r.IQ1_S.gguf | 29.7 GB | LFS | Upload xLAM-8x22b-r.IQ1_S.gguf with huggingface_hub | 5 months ago
xLAM-8x22b-r.IQ2_XS.gguf | 42 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00001-of-00005.gguf | 13.5 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00002-of-00005.gguf | 12.6 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00003-of-00005.gguf | 13.2 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00004-of-00005.gguf | 13.3 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00005-of-00005.gguf | 5.61 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00001-of-00005.gguf | 17.1 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00002-of-00005.gguf | 16.6 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00003-of-00005.gguf | 17.4 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00004-of-00005.gguf | 17.4 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00005-of-00005.gguf | 6.86 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00001-of-00005.gguf | 11.8 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00002-of-00005.gguf | 11.4 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00003-of-00005.gguf | 12 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00004-of-00005.gguf | 12 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00005-of-00005.gguf | 4.79 GB | LFS | Upload folder using huggingface_hub (#3) | 4 months ago
xLAM-8x22b-r.Q3_K_M.gguf-00005-of-00005.gguf | 6.15 GB | LFS | Upload xLAM-8x22b-r.Q3_K_M.gguf-00005-of-00005.gguf with huggingface_hub | 4 months ago
xLAM-8x22b-r.Q4_K_S.gguf-00002-of-00005.gguf | 17.6 GB | LFS | Upload xLAM-8x22b-r.Q4_K_S.gguf-00002-of-00005.gguf with huggingface_hub | 4 months ago
xLAM-8x22b-r.Q5_K_S.gguf-00005-of-00005.gguf | 8.77 GB | LFS | Upload xLAM-8x22b-r.Q5_K_S.gguf-00005-of-00005.gguf with huggingface_hub | 4 months ago
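The larger quants in the listing above are uploaded as numbered shards (`…gguf-00001-of-00005.gguf` through `…-00005-of-00005.gguf`). The sketch below shows how one might fetch every shard of a given quant with huggingface_hub; the `shard_names` helper is illustrative (it is not part of any Hugging Face API), and only the repo id and filenames are taken from the table above.

```python
def shard_names(base: str, total: int) -> list[str]:
    """Build the shard filenames matching the pattern in the file table,
    e.g. shard_names("xLAM-8x22b-r.Q2_K.gguf", 5)[0]
         == "xLAM-8x22b-r.Q2_K.gguf-00001-of-00005.gguf"
    """
    return [f"{base}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]

# To actually download (requires `pip install huggingface_hub` and roughly
# 50 GB of free disk for the Q2_K shards), something like:
#
#   from huggingface_hub import hf_hub_download
#   for name in shard_names("xLAM-8x22b-r.Q2_K.gguf", 5):
#       hf_hub_download(repo_id="MaziyarPanahi/xLAM-8x22b-r-GGUF",
#                       filename=name)
```

Depending on how the shards were produced, they may need to be concatenated back into a single file before use (e.g. `cat model.gguf-* > model.gguf`) or may load directly in a llama.cpp build with split-GGUF support; the repository's README is the authoritative reference here.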