---
base_model:
  - inflatebot/MN-12B-Mag-Mell-R1
  - TheDrummer/UnslopNemo-12B-v4.1
library_name: transformers
tags:
  - mergekit
  - merge
  - 12b
  - chat
  - roleplay
  - creative-writing
  - SLERP
license: apache-2.0
new_version: redrix/patricide-12B-Unslop-Mell-v2
---

# patricide-12B-Unslop-Mell

*The sins of the Father shan't ever be repeated this way.*

![Patricide cover art](PatricideLogo.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

This is my first merge; at the time I had no idea how writing the parameters in the config actually works. (Update: I figured it out.) If anyone has more extensive guides on merging, please let me know. I would also like to get into the science behind all this.

Both parent models produced enjoyable results on their own, so I merged them in the hope that the result would inherit their good traits. (Update: Early testing showed good coherency, but the model sometimes spits out unintelligible gibberish or made-up words. This is likely due to the broken tokenizer.)

I tested this model on the Q6_K GGUF quant and it produced satisfactory results, so I decided to upload it. Although I haven't tested it extensively in storywriting or RP, the output was stable and at least coherent. I tested at a temperature of 1 (applied last) with Min-P at 0.1. I don't know what effect DRY or XTC have on the stability of the output, or how the model fares at high context sizes. Both parent models use the ChatML template, although UnslopNemo also supports Metharme/Pygmalion; I haven't yet tested which works better. (Update: Mergekit introduced a feature to define the chat template; I will force ChatML in my next models so there is an all-around standard.)
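
If you want to reproduce those settings against the unquantized weights, here is a minimal sketch using transformers. It assumes a recent transformers release that supports `min_p` in `generate()`, and that the bundled tokenizer ships a ChatML chat template; the prompt itself is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "redrix/patricide-12B-Unslop-Mell"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML-formatted prompt via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Write the opening paragraph of a gothic short story."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampler settings from the testing above: temperature 1.0, Min-P 0.1.
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.1,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```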

Feel free to experiment, as I am only experimenting myself.

Update: I will likely release my next models once I can run them without too much fine-tuning of samplers/parameters/text templates/etc. Extensive testing as per DavidAU's approach will be done afterwards, so I can gather more impressions while already working on new models. I would like to create models that are very good in their base state, with samplers being the thing to perfect them; as such, I won't spend too much time fine-tuning samplers unless a model's base state is very promising.

## Quantization

Static GGUF Quants available at:

Weighted/Imatrix GGUF Quants available at [mradermacher/patricide-12B-Unslop-Mell-i1-GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-i1-GGUF).
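
For the GGUF quants, the equivalent sketch with llama-cpp-python might look like this. The filename below is hypothetical; point it at whichever quant file you downloaded, and note that `min_p` requires a reasonably recent llama-cpp-python build.

```python
from llama_cpp import Llama

# Hypothetical local filename; substitute the quant you actually downloaded.
llm = Llama(model_path="patricide-12B-Unslop-Mell.Q6_K.gguf", n_ctx=8192)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative storytelling assistant."},
        {"role": "user", "content": "Write the opening paragraph of a gothic short story."},
    ],
    temperature=1.0,
    min_p=0.1,
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```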

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with TheDrummer/UnslopNemo-12B-v4.1 as the base.
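
In short, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves their magnitudes better than plain averaging. Below is only an illustrative sketch of the idea, not mergekit's actual implementation (which also handles per-layer `t` values and edge cases):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between weight tensors a (t=0) and b (t=1)."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Normalize only to measure the angle between the two tensors.
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    # Weights trace the great-circle arc from a to b.
    w_a = np.sin((1 - t) * theta) / np.sin(theta)
    w_b = np.sin(t * theta) / np.sin(theta)
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)
```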

### Models Merged

The following models were included in the merge:

- [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1)
- [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/UnslopNemo-12B-v4.1
  - model: inflatebot/MN-12B-Mag-Mell-R1
merge_method: slerp
base_model: TheDrummer/UnslopNemo-12B-v4.1
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
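
The `t` list is a gradient: mergekit spreads those anchor values across the layer stack, so the first and last layers stay at the base model (UnslopNemo, t=0) while the middle layers pull fully from Mag-Mell (t=1). As a rough sketch of how such a gradient could expand to per-layer values, assuming simple linear interpolation and 40 layers (the Mistral-Nemo-12B layer count; mergekit's exact scheme may differ):

```python
import numpy as np

anchors = [0, 0.5, 1, 0.5, 0]  # the t curve from the config above
num_layers = 40                # assumed layer count for a Mistral-Nemo-12B model

# Place the anchors evenly across the layer range and interpolate between them.
positions = np.linspace(0, num_layers - 1, num=len(anchors))
per_layer_t = np.interp(np.arange(num_layers), positions, anchors)
print(per_layer_t.round(2))  # ~0 at the ends, rising to 1 at the middle layers
```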

I made the cover art myself in Photoshop... I don't use AI for stuff like that.