bartowski/Q25-1.5B-VeoLu-GGUF
Tags: Text Generation · GGUF · 6 datasets · mergekit · Merge · llama-factory · lora · Inference Endpoints · imatrix · conversational
Branch: main · 2 contributors (bartowski, inflatebot) · History: 27 commits
Latest commit: Update README.md (#1) · d6d58de (verified) · about 2 months ago
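The file listing below can also be retrieved programmatically rather than by browsing the page. A minimal sketch using the huggingface_hub client (assuming the package is installed; the repo id is the one shown above):

```python
from huggingface_hub import HfApi

# List every file tracked in bartowski/Q25-1.5B-VeoLu-GGUF on the main branch
api = HfApi()
for name in api.list_repo_files("bartowski/Q25-1.5B-VeoLu-GGUF", revision="main"):
    print(name)
```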
All files are flagged Safe by the Hub's file scanner and were last modified about 2 months ago.

| File | Size | LFS | Last commit |
| --- | --- | --- | --- |
| .gitattributes | 2.96 kB | | Upload Q25-1.5B-VeoLu.imatrix with huggingface_hub |
| Q25-1.5B-VeoLu-IQ3_M.gguf | 877 MB | LFS | Upload Q25-1.5B-VeoLu-IQ3_M.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-IQ3_XS.gguf | 832 MB | LFS | Upload Q25-1.5B-VeoLu-IQ3_XS.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-IQ4_XS.gguf | 1.02 GB | LFS | Upload Q25-1.5B-VeoLu-IQ4_XS.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q2_K.gguf | 753 MB | LFS | Upload Q25-1.5B-VeoLu-Q2_K.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q2_K_L.gguf | 981 MB | LFS | Upload Q25-1.5B-VeoLu-Q2_K_L.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q3_K_L.gguf | 980 MB | LFS | Upload Q25-1.5B-VeoLu-Q3_K_L.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q3_K_S.gguf | 861 MB | LFS | Upload Q25-1.5B-VeoLu-Q3_K_S.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q3_K_XL.gguf | 1.18 GB | LFS | Upload Q25-1.5B-VeoLu-Q3_K_XL.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_0.gguf | 1.07 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_0.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_0_4_4.gguf | 1.07 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_0_4_4.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_0_4_8.gguf | 1.07 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_0_4_8.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_0_8_8.gguf | 1.07 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_0_8_8.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_K_L.gguf | 1.29 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_K_L.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_K_M.gguf | 1.12 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_K_M.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q4_K_S.gguf | 1.07 GB | LFS | Upload Q25-1.5B-VeoLu-Q4_K_S.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q5_K_L.gguf | 1.43 GB | LFS | Upload Q25-1.5B-VeoLu-Q5_K_L.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q5_K_M.gguf | 1.29 GB | LFS | Upload Q25-1.5B-VeoLu-Q5_K_M.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q5_K_S.gguf | 1.26 GB | LFS | Upload Q25-1.5B-VeoLu-Q5_K_S.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q6_K.gguf | 1.46 GB | LFS | Upload Q25-1.5B-VeoLu-Q6_K.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q6_K_L.gguf | 1.58 GB | LFS | Upload Q25-1.5B-VeoLu-Q6_K_L.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-Q8_0.gguf | 1.89 GB | LFS | Upload Q25-1.5B-VeoLu-Q8_0.gguf with huggingface_hub |
| Q25-1.5B-VeoLu-f16.gguf | 3.56 GB | LFS | Upload Q25-1.5B-VeoLu-f16.gguf with huggingface_hub |
| Q25-1.5B-VeoLu.imatrix | 2.04 MB | LFS | Upload Q25-1.5B-VeoLu.imatrix with huggingface_hub |
| README.md | 9.58 kB | | Update README.md (#1) |
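Since the commit messages show these quants were uploaded with huggingface_hub, the same library can pull a single file down without cloning the whole repository. A minimal sketch, assuming the Q4_K_M quant from the table above is the one you want:

```python
from huggingface_hub import hf_hub_download

# Download one quantization (Q4_K_M, ~1.12 GB per the table above) into the local HF cache
model_path = hf_hub_download(
    repo_id="bartowski/Q25-1.5B-VeoLu-GGUF",
    filename="Q25-1.5B-VeoLu-Q4_K_M.gguf",
)
print(model_path)  # local path to the cached .gguf file
```

The returned path can then be handed to any GGUF-aware runtime, for example llama.cpp's llama-cli via its -m flag or llama-cpp-python's Llama(model_path=...).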