This repo contains only the Q8, Q6, Q5, and Q4 GGUF quantizations of Siithamo v0.3.

For details about this model, please refer to the original model card here.

GGUF
- Model size: 8.03B params
- Architecture: llama

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit


Model tree for kromquant/L3.1-Siithamo-v0.3-8B-GGUFs
- Quantized (3): this model

Spaces using kromquant/L3.1-Siithamo-v0.3-8B-GGUFs: 1