---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
pipeline_tag: text-generation
inference: false
license: cc-by-nc-4.0
---
# **GGUF-Imatrix quantizations for [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B/).**

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/SanjiWatsuki/).

`Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`
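
As a minimal sketch of that pipeline using the llama.cpp tools (the model paths and the calibration text file below are placeholders, not the exact files used for this repo):

```bash
# 1. Convert the original HF model to a GGUF file at F16 precision.
python convert.py ./Kunoichi-DPO-v2-7B --outtype f16 \
  --outfile Kunoichi-DPO-v2-7B-F16.gguf

# 2. Generate importance-matrix data from the F16 GGUF using a
#    calibration text file (placeholder name).
./imatrix -m Kunoichi-DPO-v2-7B-F16.gguf -f calibration.txt -o imatrix.dat

# 3. Quantize with the imatrix data (IQ3_S shown as an example).
./quantize --imatrix imatrix.dat \
  Kunoichi-DPO-v2-7B-F16.gguf Kunoichi-DPO-v2-7B-IQ3_S.gguf IQ3_S
```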

The new **IQ3_S** quant merged today has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2277](https://github.com/ggerganov/llama.cpp/releases/tag/b2277).

For the `--imatrix` data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.
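
To try one of the resulting quants locally with the llama.cpp `main` binary (the model file name below is a placeholder for whichever quant you download):

```bash
# Load the quantized model and run a short generation test.
./main -m Kunoichi-DPO-v2-7B-IQ3_S.gguf \
  -p "Write a short story about a kunoichi." -n 128
```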

# Original model information: