redrix committed (verified)
Commit b6135b5 · Parent: a7857d8

Update README.md

Files changed (1): README.md (+5 -4)
@@ -8,25 +8,26 @@ tags:
 - merge
 - 12b
 - chat
-- creative
 - roleplay
-- conversational
 - creative-writing
 ---
 # patricide-Unslop-Mell
 >The sins of the Father shan't ever be repeated this way.
 
-# WARNING: Something went wrong during the upload! It will be fixed soon.
+# WARNING: Something went wrong during the upload! It will be fixed soon. GGUF works, though.
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 This is my first merge — I still have no idea how writing the parameters in the config actually works. If anyone has more extensive guides for merging, please let me know. I would also like to get into the science behind all this.
 
 Both models produced enjoyable results, so I decided to merge them, to create a model hopefully inheriting good traits of the parents.
 
-I've tested this model on the *Q_6K GGUF* Quant (will get uploaded later) and it provided satisfactory results, thus I decided to upload it. Although I've not extensively tested it in Storywriting nor RP, the results were stable and *at least* coherent. I tested it on a **Temperature of 1** (Temperature last) and **Min-P of 0.1**. I don't know the effects **DRY** or **XTC** have on the stability of the output, or how it fares on high context sizes. Both parent models use the **ChatML** Template. Although [Unslop-Nemo](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) also uses **Metharme/Pygmalion**. I've not yet tested which works better.
+I've tested this model on the *Q_6K GGUF* Quant and it provided satisfactory results, thus I decided to upload it. Although I've not extensively tested it in Storywriting nor RP, the results were stable and *at least* coherent. I tested it on a **Temperature of 1** (Temperature last) and **Min-P of 0.1**. I don't know the effects **DRY** or **XTC** have on the stability of the output, or how it fares on high context sizes. Both parent models use the **ChatML** Template. Although [Unslop-Nemo](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) also uses **Metharme/Pygmalion**. I've not yet tested which works better.
 
 Feel free to experiment, as I am only experimenting myself.
 
+# Quantization
+Static **GGUF** Quants available at [redrix/patricide-Unslop-Mell-GGUF](https://huggingface.co/redrix/patricide-Unslop-Mell).
+
 ## Merge Details
 ### Merge Method
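The README text in this diff names the **ChatML** template and the sampler settings the author tested with (Temperature 1, applied last, and Min-P 0.1). As a minimal sketch of what that prompt format looks like in practice — the function name and message contents below are illustrative, not part of the model card:

```python
# Sketch of the ChatML prompt format the card recommends, plus the sampler
# settings it reports testing with (temperature 1.0 last, min_p 0.1).
# All identifiers here are illustrative, not from the repository.

def chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Sampler values as reported in the card; how they interact with DRY/XTC
# is untested there, so treat these as a starting point only.
SAMPLER_SETTINGS = {"temperature": 1.0, "min_p": 0.1}

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful writing assistant."},
    {"role": "user", "content": "Continue the story."},
])
print(prompt)
```

Since [Unslop-Nemo](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) also supports Metharme/Pygmalion, the same conversation could be rendered with that template instead; the card leaves which works better untested.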