knifeayumu/LLM_Collection (10 likes)
Tags: GGUF · imatrix · conversational
Notable Deleted Models:
OnlyChat-Miqu-v1.q4_k_m.gguf
(It was up for a few hours [?] by OnlyThings before being deleted from HF)
NeteLegacy-13B.q5_k_m.gguf
(I believe it's the first version of Nete by Undi95, deleted for being too NovelAI)
Downloads last month: 13,064
Format: GGUF
Model size: 32.3B params
Architecture: command-r
Hardware compatibility
[Quantization listing: GGUF files available at 4-bit (IQ4_XS, Q4_K_M), 5-bit (Q5_K_M), 6-bit (Q6_K), 8-bit (Q8_0), and 16-bit (F16), with per-file sizes ranging from 3.29 GB to 1.4 TB.]
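A GGUF file's size is roughly its parameter count times the quantization's bits per weight (bpw). Below is a minimal sketch of that estimate; the bpw figures are assumed nominal values for common llama.cpp quant types, not numbers taken from this repo, and real files differ slightly because of metadata and mixed-precision tensors.

```python
# Rough GGUF file-size estimate from parameter count and quantization
# bits-per-weight (bpw). Approximate: real files also carry metadata
# and keep some tensors at higher precision.
BPW = {  # assumed nominal bpw for common llama.cpp quant types
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q6_K": 6.56,
    "Q8_0": 8.5,   # 8-bit weights + one fp16 scale per 32-weight block
    "F16": 16.0,
}

def estimate_size_gb(n_params: float, quant: str) -> float:
    """Estimated file size in decimal gigabytes (1 GB = 1e9 bytes)."""
    return n_params * BPW[quant] / 8 / 1e9

# e.g. a 32.3B-param model (the size listed above) at Q8_0:
print(round(estimate_size_gb(32.3e9, "Q8_0"), 1))  # ~34.3
```

For the 32.3B-param model listed above, Q8_0 comes out around 34 GB, which is in line with the file sizes shown for this collection.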
Collection including knifeayumu/LLM_Collection: Archives (3 items) · Updated Oct 27, 2024