MaziyarPanahi/xLAM-8x22b-r-GGUF
Tags: Text Generation · GGUF · quantized · 2-bit · 3-bit · 4-bit · 5-bit · 6-bit · 8-bit · imatrix · conversational
1 contributor · History: 6 commits
Latest commit: MaziyarPanahi, "Update README.md" (eefb219, verified), 4 months ago
File                                         | Size    | LFS | Last commit message                                 | Updated
.gitattributes                               | 2.91 kB |     | Upload folder using huggingface_hub (#3)            | 4 months ago
README.md                                    | 2.96 kB |     | Update README.md                                    | 4 months ago
xLAM-8x22b-r.IQ1_M.gguf                      | 32.7 GB | LFS | Upload folder using huggingface_hub (#2)            | 4 months ago
xLAM-8x22b-r.IQ1_S.gguf                      | 29.7 GB | LFS | Upload xLAM-8x22b-r.IQ1_S.gguf with huggingface_hub | 4 months ago
xLAM-8x22b-r.IQ2_XS.gguf                     | 42 GB   | LFS | Upload folder using huggingface_hub (#2)            | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00001-of-00005.gguf | 13.5 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00002-of-00005.gguf | 12.6 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00003-of-00005.gguf | 13.2 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00004-of-00005.gguf | 13.3 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ3_XS.gguf-00005-of-00005.gguf | 5.61 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00001-of-00005.gguf | 17.1 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00002-of-00005.gguf | 16.6 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00003-of-00005.gguf | 17.4 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00004-of-00005.gguf | 17.4 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.IQ4_XS.gguf-00005-of-00005.gguf | 6.86 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00001-of-00005.gguf   | 11.8 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00002-of-00005.gguf   | 11.4 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00003-of-00005.gguf   | 12 GB   | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00004-of-00005.gguf   | 12 GB   | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
xLAM-8x22b-r.Q2_K.gguf-00005-of-00005.gguf   | 4.79 GB | LFS | Upload folder using huggingface_hub (#3)            | 4 months ago
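The larger quants in this listing are uploaded as multi-part files following the pattern visible above: <base>.<quant>.gguf-XXXXX-of-YYYYY.gguf, with 5-digit zero-padded indices starting at 1. A minimal sketch of a helper that enumerates the expected part names for a given quant, so all parts can be fetched before loading (the function name `split_part_names` is mine, not from any library):

```python
def split_part_names(base: str, quant: str, n_parts: int) -> list[str]:
    """Enumerate part filenames for a split GGUF upload.

    Follows the naming pattern seen in this repo's file listing:
    <base>.<quant>.gguf-XXXXX-of-YYYYY.gguf, 1-based, zero-padded
    to five digits.
    """
    return [
        f"{base}.{quant}.gguf-{i:05d}-of-{n_parts:05d}.gguf"
        for i in range(1, n_parts + 1)
    ]


# Example: the five IQ3_XS parts listed above.
parts = split_part_names("xLAM-8x22b-r", "IQ3_XS", 5)
# parts[0] is "xLAM-8x22b-r.IQ3_XS.gguf-00001-of-00005.gguf"
```

All parts must be present in the same directory before loading; single-file quants such as xLAM-8x22b-r.IQ1_M.gguf need no such handling.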