Lewdiculous committed
Commit 9f2958b
1 Parent(s): f291517

Update README.md

Files changed (1):
  1. README.md +5 -47
README.md CHANGED
@@ -1,70 +1,28 @@
  ---
  base_model:
  - SanjiWatsuki/Kunoichi-DPO-v2-7B
- - Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
  library_name: transformers
  tags:
  - mistral
  - quantized
  - text-generation-inference
- - merge
- - mergekit
  pipeline_tag: text-generation
  inference: false
  ---
- # **GGUF-Imatrix quantizations for [Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/).**
-
- ## *This has been my personal favourite and daily-driver role-play model for a while, so I decided to make new quantizations for it using the full F16-Imatrix data.*
-
- SillyTavern preset files are located [here](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/tree/main/ST%20presets).

  *If you want any specific quantization to be added, feel free to ask.*

- All credits belong to the [creator](https://huggingface.co/Test157t/).

- `Base⇢ GGUF(F16)⇢ GGUF(Quants)`

  The new **IQ3_S** quantization type, merged into llama.cpp today, has proven to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

- Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2254](https://github.com/ggerganov/llama.cpp/releases/tag/b2254).

  For --imatrix data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.

  # Original model information:

- Thanks to @Epiculous for the dope model, the help with LLM backends, and the support overall.
-
- I'd also like to thank @kalomaze for the dope sampler additions to ST.
-
- @SanjiWatsuki, thank you very much for the help, and for the model!
-
- ST users can find the TextGenPreset in the folder labeled as such.
-
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg)
-
- The following models were included in the merge:
- * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
- * [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
-   - sources:
-       - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
-         layer_range: [0, 32]
-       - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
-         layer_range: [0, 32]
- merge_method: slerp
- base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5
- dtype: bfloat16
- ```
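For reference, a slerp config like the one above is normally executed with mergekit's `mergekit-yaml` CLI. A minimal sketch, assuming mergekit is installed and the YAML is saved locally; `config.yml` and the output directory are hypothetical names:

```bash
# Assumes mergekit is available (published on PyPI; otherwise install
# from the mergekit GitHub repository).
pip install mergekit

# Run the slerp merge defined in the config above; "config.yml" and
# "./merged-model" are hypothetical names for this sketch.
mergekit-yaml config.yml ./merged-model --cuda
```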
 
  ---
  base_model:
  - SanjiWatsuki/Kunoichi-DPO-v2-7B
  library_name: transformers
  tags:
  - mistral
  - quantized
  - text-generation-inference
  pipeline_tag: text-generation
  inference: false
+ license: cc-by-nc-4.0
  ---
+ # **GGUF-Imatrix quantizations for [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B/).**

  *If you want any specific quantization to be added, feel free to ask.*

+ All credits belong to the [creator](https://huggingface.co/SanjiWatsuki/).

+ `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`

  The new **IQ3_S** quantization type, merged into llama.cpp today, has proven to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.
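Once downloaded, a quant can be loaded in koboldcpp. A minimal sketch; the GGUF filename here is illustrative:

```bash
# koboldcpp 1.60+ is required for IQ3_S support; the filename is illustrative.
python koboldcpp.py --model Kunoichi-DPO-v2-7B-IQ3_S.gguf --contextsize 8192
```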
 
+ Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2277](https://github.com/ggerganov/llama.cpp/releases/tag/b2277).

  For --imatrix data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used.
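For anyone reproducing these quants, the `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)` chain maps onto the llama.cpp tools roughly as follows. A sketch, assuming the b2277 binaries are built locally; the local model directory and calibration file names are hypothetical:

```bash
# 1) Base -> GGUF(F16): convert a local copy of the HF model to a
#    full-precision GGUF.
python convert.py ./Kunoichi-DPO-v2-7B --outtype f16 --outfile Kunoichi-DPO-v2-7B-F16.gguf

# 2) GGUF(F16) -> Imatrix-Data(F16): compute the importance matrix over a
#    calibration text file ("calibration.txt" is a hypothetical name).
./imatrix -m Kunoichi-DPO-v2-7B-F16.gguf -f calibration.txt -o imatrix-F16.dat

# 3) Imatrix-Data -> GGUF(Imatrix-Quants): produce an imatrix-aware quant
#    (IQ3_S shown; other quant types follow the same pattern).
./quantize --imatrix imatrix-F16.dat Kunoichi-DPO-v2-7B-F16.gguf Kunoichi-DPO-v2-7B-IQ3_S.gguf IQ3_S
```

The imatrix step is what distinguishes these quants from plain K-quants: the importance matrix steers the quantizer toward preserving the weights that matter most on the calibration data.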
 
  # Original model information: