---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
pipeline_tag: text-generation
inference: false
license: cc-by-nc-4.0
---

# **GGUF-Imatrix quantizations for [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B/).**

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/SanjiWatsuki/).

`Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)`

The new **IQ3_S** quant merged today has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2277](https://github.com/ggerganov/llama.cpp/releases/tag/b2277).

For the `--imatrix` data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.
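For reference, below is a minimal sketch of that pipeline using llama.cpp's own command-line tools, assuming a local b2277 build in the working directory; the model directory and the `calibration.txt` file are placeholders, since the actual calibration text behind the imatrix data is not listed in this card:

```bash
# 1. Convert the base HF model to GGUF at F16
#    ("Kunoichi-DPO-v2-7B" is a placeholder for the downloaded model directory).
python3 convert.py Kunoichi-DPO-v2-7B --outtype f16 \
    --outfile Kunoichi-DPO-v2-7B-F16.gguf

# 2. Compute the importance matrix over calibration text
#    (calibration.txt stands in for whatever data was actually used).
./imatrix -m Kunoichi-DPO-v2-7B-F16.gguf -f calibration.txt \
    -o imatrix-Kunocchini-7b-128k-test-F16.dat

# 3. Produce an imatrix-aware quant, e.g. IQ3_S.
./quantize --imatrix imatrix-Kunocchini-7b-128k-test-F16.dat \
    Kunoichi-DPO-v2-7B-F16.gguf Kunoichi-DPO-v2-7B-IQ3_S.gguf IQ3_S
```

# Original model information: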