---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
- TheDrummer/UnslopNemo-12B-v4.1
library_name: transformers
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- SLERP
license: apache-2.0
new_version: redrix/patricide-12B-Unslop-Mell-v2
---
# patricide-12B-Unslop-Mell

> The sins of the Father shan't ever be repeated this way.

![PatricideLogo.png](https://cdn-uploads.huggingface.co/production/uploads/674c58de6bfa8d3e4ff8dcf3/pdKS7W4futo8XgqRaT8Rb.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This is my first merge; I still have no idea how writing the parameters in the config actually works. (**Update:** I figured it out.) If anyone has more extensive guides on merging, please let me know. I would also like to get into the science behind all this.

Both parent models produced enjoyable results, so I decided to merge them in the hope of creating a model that inherits their good traits. (**Update:** Early testing of this model revealed good coherency, but it sometimes spits out unintelligible gibberish or made-up words. This is likely due to the broken tokenizer.)

I've tested this model with the [**Q6_K GGUF**](https://huggingface.co/redrix/patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q6_K.gguf) quant and it provided satisfactory results, so I decided to upload it. Although I've not extensively tested it in storywriting or RP, the results were stable and *at least* coherent. I tested it at a **Temperature of 1** (Temperature last) and **Min-P of 0.1**. I don't know what effects **DRY** or **XTC** have on the stability of the output, or how the model fares at high context sizes.

Both parent models use the **ChatML** template, although [Unslop-Nemo](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) also uses **Metharme/Pygmalion**; I've not yet tested which works better. (**Update:** Mergekit introduced a feature to define the template. I will force ChatML in my next models, so there is an all-around standard.) Feel free to experiment, as I am only experimenting myself.

**Update:** I will likely release my next models once I am able to run them without too much fine-tuning of samplers/parameters/text templates/etc. Extensive testing as per [DavidAU's approach](https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters) will be done afterwards, so I can gather more impressions while already working on new models. I would like to create models that are very good in their base state, with samplers being the thing that perfects them. As such, I won't spend too much time fine-tuning samplers unless a model's base state is very promising.

# Quantization

Static **GGUF** quants available at:
- [redrix/patricide-12B-Unslop-Mell-GGUF](https://huggingface.co/redrix/patricide-12B-Unslop-Mell-GGUF) (has fewer quants than below ⬇️)
- [mradermacher/patricide-12B-Unslop-Mell-GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-GGUF) (Thanks ♥️)

Weighted/imatrix **GGUF** quants available at [mradermacher/patricide-12B-Unslop-Mell-i1-GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-i1-GGUF).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
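For intuition, here is a minimal sketch of what SLERP (spherical linear interpolation) does to a pair of weight tensors. This is an illustration of the idea only, not mergekit's actual implementation, which handles dtypes, per-tensor details, and more edge cases:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors.

    t = 0 returns v0, t = 1 returns v1; in between, the result moves
    along the arc between the two tensors instead of the straight line
    that plain linear interpolation (LERP) would take.
    """
    # Angle between the two tensors, computed on normalized, flattened copies
    u0 = v0.ravel() / (np.linalg.norm(v0) + eps)
    u1 = v1.ravel() / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(u0, u1), -1.0, 1.0))
    theta = np.arccos(dot)

    # Nearly parallel tensors: fall back to LERP to avoid dividing by ~0
    if np.sin(theta) < eps:
        return (1.0 - t) * v0 + t * v1

    # Standard SLERP weights
    w0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    w1 = np.sin(t * theta) / np.sin(theta)
    return w0 * v0 + w1 * v1

# Example: blend two random "layers" halfway along the arc
a, b = np.random.randn(4, 4), np.random.randn(4, 4)
merged = slerp(0.5, a, b)
```

Compared to plain linear interpolation, SLERP follows the arc between the two weight vectors rather than cutting across it, which is often given as the reason SLERP merges tend to stay coherent.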
### Models Merged

The following models were included in the merge:

* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1)

### Configuration

The following YAML configuration was used to produce this model (see the note below on how the `t` curve is applied across layers):

```yaml
models:
  - model: TheDrummer/UnslopNemo-12B-v4.1
  - model: inflatebot/MN-12B-Mag-Mell-R1
merge_method: slerp
base_model: TheDrummer/UnslopNemo-12B-v4.1
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```

> I made the cover art myself in Photoshop... I don't use AI for stuff like that.
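A note on the `t` gradient above: mergekit treats a list like `[0, 0.5, 1, 0.5, 0]` as anchor points spread evenly across the layer stack and interpolates between them, so the merge starts at the base model (UnslopNemo) in the lowest layers, leans fully toward Mag-Mell in the middle, and returns to UnslopNemo at the top. A rough sketch of that expansion follows; the helper name and exact semantics are my own illustration, not mergekit's API:

```python
import numpy as np

def expand_gradient(anchors: list[float], num_layers: int) -> list[float]:
    """Expand a mergekit-style gradient list into one t value per layer.

    Hypothetical helper for illustration; mergekit's internals differ.
    Anchor points are spaced evenly over the layer stack and linearly
    interpolated in between.
    """
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layer_pos, anchor_pos, anchors).tolist()

# Mistral-Nemo-based 12B models have 40 hidden layers: t ramps from 0 up
# to 1 over the first half (UnslopNemo -> Mag-Mell) and back down to 0.
t_per_layer = expand_gradient([0, 0.5, 1, 0.5, 0], 40)
print([round(t, 2) for t in t_per_layer])
```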