Lewdiculous committed on
Commit 5f2d88f
1 Parent(s): 9f2958b

Update README.md

Files changed (1)
  1. README.md +48 -5
README.md CHANGED
@@ -1,28 +1,71 @@
  ---
  base_model:
  - SanjiWatsuki/Kunoichi-DPO-v2-7B
  library_name: transformers
  tags:
  - mistral
  - quantized
  - text-generation-inference
  pipeline_tag: text-generation
  inference: false
- license: cc-by-nc-4.0
  ---
- # **GGUF-Imatrix quantizations for [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B/).**

  *If you want any specific quantization to be added, feel free to ask.*

- All credits belong to the [creator](https://huggingface.co/SanjiWatsuki/).

- `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`

  The new **IQ3_S** quant merged today has proven to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

- Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2277](https://github.com/ggerganov/llama.cpp/releases/tag/b2277).

  For the `--imatrix` data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.

  # Original model information:

  ---
  base_model:
  - SanjiWatsuki/Kunoichi-DPO-v2-7B
+ - Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
  library_name: transformers
  tags:
  - mistral
  - quantized
  - text-generation-inference
+ - merge
+ - mergekit
  pipeline_tag: text-generation
  inference: false
+
  ---
+ # **GGUF-Imatrix quantizations for [Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/).**
+
+ ## *This has been my personal favourite and daily-driver role-play model for a while, so I decided to make new quantizations for it using the full F16-Imatrix data.*
+
+ SillyTavern preset files are located [here](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/tree/main/ST%20presets).

  *If you want any specific quantization to be added, feel free to ask.*

+ All credits belong to the [creator](https://huggingface.co/Test157t/).

+ `Base⇢ GGUF(F16)⇢ GGUF(Quants)`

  The new **IQ3_S** quant merged today has proven to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

+ Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2254](https://github.com/ggerganov/llama.cpp/releases/tag/b2254).

  For the `--imatrix` data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.
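The chain above boils down to three llama.cpp steps. Below is a minimal sketch of that flow, assuming a local checkout of llama.cpp around the tagged release (so `convert.py`, `imatrix`, and `quantize` are available); the paths, the `calibration.txt` file, and the quant list are illustrative assumptions, not taken from this repo.

```python
# Sketch of the Base -> GGUF(F16) -> imatrix -> quantized-GGUF chain, driven
# through llama.cpp's CLI tools. Paths and the calibration text are assumed.
import subprocess

MODEL_DIR = "Kunocchini-7b-128k-test"            # local HF checkpoint (assumed path)
F16_GGUF = "Kunocchini-7b-128k-test-F16.gguf"    # intermediate full-precision GGUF
IMATRIX = "imatrix-Kunocchini-7b-128k-test-F16.dat"

# 1. Base -> GGUF(F16): convert the HF checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "convert.py", MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2. Build importance-matrix data from the F16 GGUF over a calibration text
#    (the .dat file named above was produced by a step like this).
subprocess.run(
    ["./imatrix", "-m", F16_GGUF, "-f", "calibration.txt", "-o", IMATRIX],
    check=True,
)

# 3. GGUF(F16) -> quantized GGUF, passing the imatrix to the quantizer.
#    The quant list here is illustrative.
for quant in ("IQ3_S", "Q4_K_M", "Q5_K_M"):
    subprocess.run(
        ["./quantize", "--imatrix", IMATRIX,
         F16_GGUF, f"Kunocchini-7b-128k-test-{quant}.gguf", quant],
        check=True,
    )
```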
 
  # Original model information:

+ Thanks to @Epiculous for the dope model / help with LLM backends and support overall.
+
+ I'd also like to thank @kalomaze for the dope sampler additions to ST.
+
+ @SanjiWatsuki Thank you very much for the help, and the model!
+
+ ST users can find the TextGenPreset in the folder labeled as such.
+
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg)
+
+ The following models were included in the merge:
+ * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
+ * [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+   - sources:
+       - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
+         layer_range: [0, 32]
+       - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
+         layer_range: [0, 32]
+ merge_method: slerp
+ base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5
+ dtype: bfloat16
+ ```
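For readers unfamiliar with `merge_method: slerp`: instead of averaging weights on a straight line, slerp interpolates along the arc between the two tensors, with the interpolation factor `t` chosen per tensor by the filters above. The following is a minimal numpy sketch of that operation, assuming equal-shaped tensors; it is illustrative only, not mergekit's actual implementation.

```python
# Minimal spherical-linear-interpolation (slerp) sketch for two weight
# tensors; illustrative only, not mergekit's actual code.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Interpolate from v0 (t=0) to v1 (t=1) along the arc between them."""
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    # Angle between the two flattened, normalized tensors.
    dot = float(np.clip(
        (a / np.linalg.norm(a)) @ (b / np.linalg.norm(b)), -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < 1e-4:
        # Nearly colinear tensors: plain linear interpolation is stable here.
        mixed = (1.0 - t) * a + t * b
    else:
        s = np.sin(theta)
        mixed = (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return mixed.reshape(v0.shape).astype(v0.dtype)

# Example: a 50/50 merge of two random "layers" (the `- value: 0.5` default).
w0 = np.random.randn(8, 8).astype(np.float32)
w1 = np.random.randn(8, 8).astype(np.float32)
w_merged = slerp(0.5, w0, w1)
```

Under the config above, `t` for `self_attn` tensors ramps roughly from 0 (the Kunoichi base) to 1 (Fett-uccine) across the 32 layers, the `mlp` schedule is the mirror image, and all remaining tensors use a flat 0.5. A config like this is normally executed with mergekit's `mergekit-yaml` entry point (for example, `mergekit-yaml config.yaml ./merged-model`).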