---
base_model:
- unsloth/gemma-2-9b-it
- princeton-nlp/gemma-2-9b-it-SimPO
- wzhouad/gemma-2-9b-it-WPO-HB
- nbeerbower/Gemma2-Gutenberg-Doppel-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Gemma-2-Ataraxy-Doppel-9B

One last test model... which, again, you should ignore.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della merge method, with [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) as the base.

### Models Merged

The following models were included in the merge:
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB)
* [nbeerbower/Gemma2-Gutenberg-Doppel-9B](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: unsloth/gemma-2-9b-it
dtype: bfloat16
merge_method: della
parameters:
  epsilon: 0.1
  int8_mask: 1.0
  lambda: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 42]
    model: unsloth/gemma-2-9b-it
  - layer_range: [0, 42]
    model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.55
      weight: 0.6
  - layer_range: [0, 42]
    model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters:
      density: 0.35
      weight: 0.6
  - layer_range: [0, 42]
    model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
    parameters:
      density: 0.25
      weight: 0.4
```

The merge itself can be reproduced by saving the configuration above to a file and running it through mergekit's `mergekit-yaml` entry point.
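
## Usage

Since the card ships without an inference snippet, below is a minimal sketch of loading the merged model with `transformers`. The repo id `your-namespace/Gemma-2-Ataraxy-Doppel-9B` is a placeholder, not the actual published location; substitute the id this card is hosted under.

```python
# Minimal inference sketch for this merge (assumptions: the model is
# published on the Hugging Face Hub, and "your-namespace/..." below is a
# placeholder repo id to be replaced with the real one).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Gemma-2-Ataraxy-Doppel-9B"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was run in
    device_map="auto",
)

# Gemma 2 instruct models carry a chat template in the tokenizer config.
messages = [{"role": "user", "content": "Write a two-sentence ghost story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```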