---
base_model:
- Nohobby/L3.3-Prikol-70B-EXTRA
- Steelskull/L3.3-Electra-R1-70b
- unsloth/Llama-3.3-70B-Instruct
- KaraKaraWitch/Llama-3.3-MagicalGirl-2
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [unsloth/Llama-3.3-70B-Instruct](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct) as the base.

### Models Merged

The following models were included in the merge:
* [Nohobby/L3.3-Prikol-70B-EXTRA](https://huggingface.co/Nohobby/L3.3-Prikol-70B-EXTRA)
* [Steelskull/L3.3-Electra-R1-70b](https://huggingface.co/Steelskull/L3.3-Electra-R1-70b)
* [KaraKaraWitch/Llama-3.3-MagicalGirl-2](https://huggingface.co/KaraKaraWitch/Llama-3.3-MagicalGirl-2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# I could have, I don't know, gone outside instead.
# I don't think anything good will come of this, but my hands are itching too badly.
# Hi, by the way!

base_model: unsloth/Llama-3.3-70B-Instruct
merge_method: della
dtype: bfloat16
models:
  - model: Nohobby/L3.3-Prikol-70B-EXTRA
    parameters:
      weight: 1.0
  - model: Steelskull/L3.3-Electra-R1-70b
    parameters:
      weight: 1.0
  - model: KaraKaraWitch/Llama-3.3-MagicalGirl-2
    parameters:
      weight: 1.0
  - model: unsloth/Llama-3.3-70B-Instruct
    parameters:
      weight: 1.0
```
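To give a rough intuition for what a DELLA-style merge does, here is a toy numpy sketch: each model's delta (fine-tuned weights minus base weights) is stochastically pruned, with larger-magnitude elements kept more often and survivors rescaled by 1/(1 - p) to preserve the delta's expected value, and the pruned deltas are then averaged back onto the base. This is only an illustrative sketch under stated assumptions, not mergekit's actual implementation; the function names `magprune` and `della_merge` and the exact probability schedule are hypothetical simplifications of the method described in the paper.

```python
import numpy as np

def magprune(delta, density=0.5, epsilon=0.1, rng=None):
    """Toy sketch of DELLA-style magnitude-based stochastic pruning.

    Each element of the delta gets a drop probability centred on
    (1 - density), spread by +/- epsilon, with larger-magnitude
    elements assigned lower drop probability. Survivors are rescaled
    by 1/(1 - p) so the delta's expected value is preserved.
    """
    rng = rng or np.random.default_rng(0)
    flat = delta.ravel()
    n = flat.size
    # rank 0 = smallest magnitude, rank n-1 = largest
    ranks = np.abs(flat).argsort().argsort()
    p_drop = (1.0 - density) + epsilon * (0.5 - ranks / max(n - 1, 1))
    p_drop = np.clip(p_drop, 0.0, 1.0 - 1e-6)
    mask = rng.random(n) >= p_drop
    pruned = np.where(mask, flat / (1.0 - p_drop), 0.0)
    return pruned.reshape(delta.shape)

def della_merge(base, finetuned, weights, density=0.5, epsilon=0.1):
    """Merge: base + weighted average of stochastically pruned deltas."""
    total = np.zeros_like(base)
    for model, w in zip(finetuned, weights):
        total += w * magprune(model - base, density, epsilon)
    return base + total / sum(weights)
```

With `density=1.0` and `epsilon=0.0` nothing is dropped and the merge reduces to a plain weighted average of deltas, which mirrors what the `weight: 1.0` entries in the configuration above express.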