---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- merge
- mergekit
pipeline_tag: text-generation
inference: false
---

# **GGUF-Imatrix quantizations for [Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/).**

## *This has been my personal favourite and daily-driver role-play model for a while, so I decided to make new quantizations for it using the full F16-Imatrix data.*

SillyTavern preset files are located [here](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/tree/main/ST%20presets).

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/Test157t/).

`Base⇢ GGUF(F16)⇢ GGUF(Quants)`
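
As a rough illustration of that pipeline, here is a minimal sketch using llama.cpp's `convert.py` and `quantize` tools. The file names and the quant-type list are assumptions for illustration, not the exact commands used for this repo, and the tool names may differ in newer llama.cpp builds.

```python
# Sketch of the Base -> GGUF(F16) -> GGUF(Quants) pipeline with llama.cpp tools.
# Paths, output names, and the quant-type list below are illustrative assumptions.
import subprocess

MODEL_DIR = "Kunocchini-7b-128k-test"          # local checkout of the base model
F16_GGUF = "Kunocchini-7b-128k-test-F16.gguf"  # intermediate full-precision GGUF

# 1) Convert the HF model to an F16 GGUF (convert.py ships with llama.cpp).
subprocess.run(
    ["python", "convert.py", MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2) Quantize the F16 GGUF into the smaller formats.
for quant in ["Q4_K_M", "Q5_K_M", "Q6_K"]:
    subprocess.run(
        ["./quantize", F16_GGUF, f"Kunocchini-7b-128k-test-{quant}.gguf", quant],
        check=True,
    )
```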

The new **IQ3_S** quant type, merged into llama.cpp today, has shown better results than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/) release [b2254](https://github.com/ggerganov/llama.cpp/releases/tag/b2254).

For the `--imatrix` data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.
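
For context, an importance-matrix `.dat` file like this one is generated by llama.cpp's `imatrix` tool from the F16 GGUF plus a calibration text, and then passed to `quantize`. The sketch below is only an illustration: the calibration file and output names are assumptions, and the tool names may differ in newer llama.cpp builds.

```python
# Sketch: generate importance-matrix data and apply it during quantization.
# The calibration text file and output names are assumptions for illustration.
import subprocess

F16_GGUF = "Kunocchini-7b-128k-test-F16.gguf"
IMATRIX = "imatrix-Kunocchini-7b-128k-test-F16.dat"

# 1) Compute the importance matrix over a calibration corpus.
subprocess.run(
    ["./imatrix", "-m", F16_GGUF, "-f", "calibration.txt", "-o", IMATRIX],
    check=True,
)

# 2) Use it to guide a low-bit quantization, e.g. the IQ3_S quant.
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX, F16_GGUF,
     "Kunocchini-7b-128k-test-IQ3_S.gguf", "IQ3_S"],
    check=True,
)
```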

# Original model information:

Thanks to @Epiculous for the dope model, the help with LLM backends, and the support overall.

I'd also like to thank @kalomaze for the dope sampler additions to ST.

@SanjiWatsuki, thank you very much for the help and the model!

ST users can find the TextGenPreset in the folder labeled as such.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg)

The following models were included in the merge:

* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
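
If you want to reproduce the merge itself, a config file holding the YAML above can be fed to mergekit's `mergekit-yaml` command. A minimal sketch follows; the config file name and output directory are placeholders.

```python
# Sketch: run the SLERP merge with mergekit's CLI from Python.
# "merge-config.yaml" would hold the YAML config above; the output path is arbitrary.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./Kunocchini-7b-128k-test-merge"],
    check=True,
)
```

The merged checkpoint written to the output directory is what would then go through the GGUF conversion and quantization steps described earlier.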