GGUF

[Request #54] - Click the link for more context.
Original model: concedo/KobbleTinyV2-1.1B

Use with the latest version of KoboldCpp, or this more up-to-date fork if you have issues.
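
If you would rather load the quant programmatically than through KoboldCpp, a minimal sketch with llama-cpp-python (an alternative loader, not the tool this card recommends) could look like the one below; the file name and prompt format are illustrative assumptions, not something specified here:

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python instead of KoboldCpp.
# The file name below is hypothetical; substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="KobbleTinyV2-1.1B-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=2048,  # context window; adjust to taste
)

# Prompt format is an assumption (Alpaca-style); check the original model card.
prompt = "### Instruction:\nWrite one sentence about tiny language models.\n\n### Response:\n"
out = llm(prompt, max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```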

General chart of relative quant performances:

Recommended read:

"Which GGUF is right for me? (Opinionated)" by Artefact2

[Image: first graph from "Which GGUF is right for me? (Opinionated)" by Artefact2]

Model size: 1.1B params
Architecture: llama
Quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
