---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
- TheDrummer/UnslopNemo-12B-v4.1
library_name: transformers
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- SLERP
license: apache-2.0
new_version: redrix/patricide-12B-Unslop-Mell-v2
---
# patricide-12B-Unslop-Mell
> The sins of the Father shan't ever be repeated this way.

![PatricideLogo.png](https://cdn-uploads.huggingface.co/production/uploads/674c58de6bfa8d3e4ff8dcf3/pdKS7W4futo8XgqRaT8Rb.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This is my first merge; I still have no idea how the parameters in the config actually work. (**Update:** I figured it out.) If anyone has more extensive guides for merging, please let me know. I would also like to get into the science behind all this.

Both models produced enjoyable results, so I decided to merge them to create a model that hopefully inherits the good traits of its parents. (**Update:** Early testing of this model revealed good coherency, but it sometimes spits out unintelligible gibberish or made-up words. This is likely due to the broken tokenizer.)

I've tested this model with the [**Q6_K GGUF**](https://huggingface.co/redrix/patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q6_K.gguf) quant and it gave satisfactory results, so I decided to upload it. Although I haven't extensively tested it in storywriting or RP, the results were stable and *at least* coherent. I tested it at a **Temperature of 1** (Temperature applied last) and a **Min-P of 0.1**. I don't know what effect **DRY** or **XTC** have on the stability of the output, or how the model fares at high context sizes. Both parent models use the **ChatML** template, although [Unslop-Nemo](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) also supports **Metharme/Pygmalion**; I've not yet tested which works better. (**Update:** Mergekit introduced a feature to define the chat template; I will force ChatML in my next models so they follow one standard across the board.)
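
For reference, those settings translate to something like the following with `llama-cpp-python` (a minimal sketch: the context size and prompt are placeholders, and `min_p` support depends on your llama-cpp-python version):

```python
from llama_cpp import Llama

# Load the Q6_K quant linked above (downloaded locally).
llm = Llama(model_path="patricide-12B-Unslop-Mell-Q6_K.gguf", n_ctx=8192)

# ChatML prompt format, since both parent models use the ChatML template.
prompt = (
    "<|im_start|>system\nYou are a creative writing assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite the opening line of a story.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.0,  # Temperature of 1
    min_p=0.1,        # Min-P of 0.1
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```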

Feel free to experiment, as I am only experimenting myself.

**Update:** I will likely release my next models once I can run them without too much fine-tuning of samplers/parameters/text templates/etc. Extensive testing along the lines of [DavidAU's approach](https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters) will come afterwards, so I can gather more impressions while already working on new models. I would like to create models that are very good in their base state, with samplers only there to perfect them. As such, I won't spend too much time fine-tuning samplers unless a model's base state is very promising.

# Quantization
Static **GGUF** Quants available at:
- [redrix/patricide-12B-Unslop-Mell-GGUF](https://huggingface.co/redrix/patricide-12B-Unslop-Mell-GGUF) (fewer quants than below ⬇️)
- [mradermacher/patricide-12B-Unslop-Mell-GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-GGUF) (Thanks ♥️)

Weighted/Imatrix **GGUF** Quants available at [mradermacher/patricide-12B-Unslop-Mell-i1-GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-i1-GGUF).
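
To fetch a quant programmatically, something like this works with `huggingface_hub` (a sketch; swap the filename for whichever quant fits your hardware):

```python
from huggingface_hub import hf_hub_download

# Download the Q6_K quant tested above; other sizes live in the same repos.
path = hf_hub_download(
    repo_id="redrix/patricide-12B-Unslop-Mell-GGUF",
    filename="patricide-12B-Unslop-Mell-Q6_K.gguf",
)
print(path)
```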

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
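
For intuition: SLERP (spherical linear interpolation) blends two weight tensors along the arc of a hypersphere rather than along a straight line, which preserves the angular geometry of the weights better than plain linear averaging. A minimal sketch of the idea (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)          # unit-normalize both tensors
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.sum(a_n * b_n), -1.0, 1.0)  # cosine of the angle between them
    theta = np.arccos(dot)
    if theta < eps:                              # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```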

### Models Merged

The following models were included in the merge:
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/UnslopNemo-12B-v4.1
  - model: inflatebot/MN-12B-Mag-Mell-R1
merge_method: slerp
base_model: TheDrummer/UnslopNemo-12B-v4.1
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]

```
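
Here `t` is a layer-wise gradient: 0 keeps the base model ([UnslopNemo](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1)) and 1 takes [Mag-Mell](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1), so the ends of the layer stack stay close to UnslopNemo while the middle layers lean toward Mag-Mell. To reproduce the merge, the config can be saved as `config.yaml` and run through mergekit (a sketch of the Python entry point; exact options may differ between mergekit versions):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; set cuda=True if a GPU is available.
run_merge(
    merge_config,
    "./patricide-12B-Unslop-Mell",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```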

> I made the cover art myself in Photoshop... I don't use AI for stuff like that.