L3.1-Niitorm-8B-DPO-t0.0001 (GGUFs)

This model was converted to GGUF format from v000000/L3.1-Niitorm-8B-DPO-t0.0001 using llama.cpp. Refer to the original model card for more details on the model.


Ordered by quality:

  • q8_0 imatrix
  • q6_k imatrix
  • q5_k_s imatrix
  • q4_k_s imatrix
  • iq4_xs imatrix

imatrix data (V2, 287 kB): randomized bartowski data, kalomeze groups, ERP/RP snippets, working GPT-4 code, toxic QA, human messaging, randomized posts, stories, novels
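To pick a quant for your hardware, file size can be roughly estimated from the parameter count and the quant's bits per weight. The bits-per-weight figures below are approximate llama.cpp values and an assumption on my part, not numbers published in this repo; real files are slightly larger due to metadata and non-quantized tensors:

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bytes.
# The bpw values are approximate llama.cpp figures (assumption, not
# taken from this repo); actual files also carry metadata overhead.
APPROX_BPW = {
    "q8_0": 8.5,
    "q6_k": 6.56,
    "q5_k_s": 5.52,
    "q4_k_s": 4.58,
    "iq4_xs": 4.25,
}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimated file size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

if __name__ == "__main__":
    # 8.03B parameters, per the model card.
    for q in APPROX_BPW:
        print(f"{q:8s} ~{est_size_gb(8.03e9, q):.1f} GB")
```

Under these assumptions, q8_0 lands around 8.5 GB and iq4_xs around 4.3 GB, which matches the quality-for-size ordering of the list above.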

Format: GGUF
Model size: 8.03B params
Architecture: llama

