---
base_model:
  - inflatebot/MN-12B-Mag-Mell-R1
  - TheDrummer/UnslopNemo-12B-v4.1
library_name: transformers
tags:
  - mergekit
  - merge
  - 12b
  - chat
  - roleplay
  - creative-writing
---

# patricide-12B-Unslop-Mell

The sins of the Father shan't ever be repeated this way.

![Patricide logo](PatricideLogo.png)

This is a merge of pre-trained language models created using mergekit.

This is my first merge, and I still don't fully understand how the parameters in the config actually work. If anyone knows of more extensive guides on merging, please let me know; I'd also like to get into the science behind all this.

Both models produced enjoyable results on their own, so I merged them in the hope of creating a model that inherits the good traits of both parents.

I've tested this model with the Q6_K GGUF quant and it produced satisfactory results, so I decided to upload it. Although I haven't tested it extensively in storywriting or RP, the output was stable and at least coherent. I tested it at a Temperature of 1 (applied last in the sampler order) and a Min-P of 0.1. I don't know what effect DRY or XTC have on output stability, or how the model fares at high context sizes. Both parent models use the ChatML template, although UnslopNemo also supports Metharme/Pygmalion; I haven't yet tested which works better.
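If you want a quick sanity check outside a frontend, here is a minimal sketch using llama-cpp-python with the settings above. The model path and filename are placeholders, and `min_p` requires a reasonably recent llama-cpp-python build; exact sampler ordering depends on your backend.

```python
from llama_cpp import Llama

# Hypothetical local path to the Q6_K quant; substitute your own download.
llm = Llama(model_path="patricide-12B-Unslop-Mell.Q6_K.gguf", n_ctx=8192)

# Both parents use ChatML, so the GGUF metadata should already carry a
# suitable chat template; create_chat_completion applies it automatically.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Write the opening line of a gothic short story."},
    ],
    temperature=1.0,  # Temperature of 1, per the settings tested above
    min_p=0.1,        # Min-P of 0.1
)
print(out["choices"][0]["message"]["content"])
```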

Feel free to experiment, as I am only experimenting myself.

## Quantization

Static GGUF quants are available at:

Weighted/imatrix GGUF quants are available at [mradermacher/patricide-12B-Unslop-Mell-i1-GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-i1-GGUF).
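A minimal sketch of fetching one of the imatrix quants with `huggingface_hub`; the filename below is an assumption, so check the repo's file list for the quant you actually want.

```python
from huggingface_hub import hf_hub_download

# Filename is hypothetical; browse the repo to pick a real quant file.
path = hf_hub_download(
    repo_id="mradermacher/patricide-12B-Unslop-Mell-i1-GGUF",
    filename="patricide-12B-Unslop-Mell.i1-Q4_K_M.gguf",
)
print(path)
```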

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
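For intuition: SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves the magnitude of the interpolated weights better than a plain weighted average. Below is a minimal NumPy sketch of the idea, not mergekit's actual implementation (which handles tensor flattening, numerical guards, and per-layer `t` values):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors."""
    # Use normalized copies only to measure the angle between the tensors.
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    omega = np.arccos(dot)           # angle between the two weight vectors
    if omega < eps:                  # nearly parallel: fall back to lerp
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```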

### Models Merged

The following models were included in the merge:

- [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) (base)
- [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/UnslopNemo-12B-v4.1
  - model: inflatebot/MN-12B-Mag-Mell-R1
merge_method: slerp
base_model: TheDrummer/UnslopNemo-12B-v4.1
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
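As far as I understand it, the `t` list defines a gradient across the model's layers: mergekit interpolates these anchor values over the layer stack, where t = 0 keeps the base model's (UnslopNemo's) weights and t = 1 takes Mag-Mell's. With `[0, 0.5, 1, 0.5, 0]`, the first and last layers stay closest to UnslopNemo while the middle layers lean toward Mag-Mell. A rough sketch of how such a curve could map to per-layer values (the 40-layer count matches Mistral-Nemo-sized models; the exact sampling is mergekit's internal business):

```python
import numpy as np

anchors = [0, 0.5, 1, 0.5, 0]   # the t curve from the config above
n_layers = 40                   # assumed layer count for a Mistral Nemo 12B

# Spread the anchor points evenly across the layer stack and interpolate.
xs = np.linspace(0, len(anchors) - 1, n_layers)
t_per_layer = np.interp(xs, range(len(anchors)), anchors)
print(t_per_layer.round(2))     # ~0 at the ends, ~1 in the middle layers
```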