qwp4w3hyb/Cerebrum-1.0-8x7b-imatrix-GGUF
Tags: GGUF, mixtral, conversational, finetune, Inference Endpoints. License: apache-2.0.
Branch: main. 2 contributors, 21 commits. Latest commit: "Update README.md" by qwp4w3hyb (3d681bc, verified), 10 months ago.
| File | Size | LFS | Last commit message | Last updated |
|---|---|---|---|---|
| .gitattributes | 1.63 kB | | Track all ggufs | 10 months ago |
| README.md | 1.02 kB | | Update README.md | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ1_S.gguf | 9.82 GB | LFS | Add cerebrum-1.0-8x7b-q8imatrix-IQ1_S.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ2_M.gguf | 15.5 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ2_M.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ2_S.gguf | 14.1 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ2_S.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ2_XS.gguf | 13.9 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ2_XS.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ3_M.gguf | 21.4 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ3_M.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ3_S.gguf | 20.4 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ3_S.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ3_XS.gguf | 19.3 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ3_XS.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ4_NL.gguf | 26.5 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ4_NL.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-IQ4_XS.gguf | 25.1 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-IQ4_XS.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-Q4_K_M.gguf | 28.4 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-Q4_K_M.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-Q4_K_S.gguf | 26.7 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-Q4_K_S.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-Q5_K_M.gguf | 33.2 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-Q5_K_M.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-Q5_K_S.gguf | 32.2 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-Q5_K_S.gguf | 10 months ago |
| cerebrum-1.0-8x7b-q8imatrix-Q6_K.gguf | 38.4 GB | LFS | Upload cerebrum-1.0-8x7b-q8imatrix-Q6_K.gguf | 10 months ago |
| imatrix_q8_cb512_24ch_gmerged.dat | 25.7 MB | LFS | Add imatrix generated from q8 | 10 months ago |
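
Any of the GGUF files listed above can be fetched programmatically rather than through the web UI. A minimal sketch, assuming the `huggingface_hub` Python package is installed; the Q4_K_M file is used here only as an example, and any other filename from the table works the same way:

```python
from huggingface_hub import hf_hub_download

# Download one quantization from this repo into the local Hugging Face cache
# and return its path. Swap the filename for a smaller quant (e.g. an IQ3
# variant) to trade output quality for disk and memory footprint.
model_path = hf_hub_download(
    repo_id="qwp4w3hyb/Cerebrum-1.0-8x7b-imatrix-GGUF",
    filename="cerebrum-1.0-8x7b-q8imatrix-Q4_K_M.gguf",
)

print(model_path)  # pass this path to a GGUF-compatible runtime
```

The returned path can then be loaded by any GGUF-compatible runtime, for example llama.cpp's command-line tool or the `llama-cpp-python` bindings.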