---
base_model:
- unsloth/DeepSeek-R1-Distill-Llama-70B
- mergekit-community/L3.3-L3.1-NewTempusBlated-70B
- Nohobby/AbominationSnowPig
- SicariusSicariiStuff/Negative_LLAMA_70B
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [mergekit-community/L3.3-L3.1-NewTempusBlated-70B](https://huggingface.co/mergekit-community/L3.3-L3.1-NewTempusBlated-70B) as the base.

### Models Merged

The following models were included in the merge:

* [unsloth/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B)
* [Nohobby/AbominationSnowPig](https://huggingface.co/Nohobby/AbominationSnowPig)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: unsloth/DeepSeek-R1-Distill-Llama-70B
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      select_topk:
        - value: [0.18, 0.3, 0.32, 0.38, 0.32, 0.3]
  - model: Nohobby/AbominationSnowPig
    parameters:
      select_topk:
        - value: [0.1, 0.06, 0.05, 0.05, 0.08]
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      select_topk: 0.17
  - model: mergekit-community/L3.3-L3.1-NewTempusBlated-70B
    parameters:
      select_topk: 0.55
base_model: mergekit-community/L3.3-L3.1-NewTempusBlated-70B
merge_method: sce
parameters:
  int8_mask: true
  rescale: true
  normalize: true
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
```
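### Parameter Gradients

The list-valued `select_topk` entries in the configuration are gradients: mergekit linearly interpolates a list of values across the model's layers, so `[0.18, 0.3, 0.32, 0.38, 0.32, 0.3]` ramps the retained fraction up through the middle layers and back down toward the end. The helper below is an illustrative sketch of that expansion (it is not mergekit's internal code; the 80-layer depth is assumed to match Llama 3.x 70B):

```python
import numpy as np

def expand_gradient(gradient: list[float], num_layers: int = 80) -> list[float]:
    """Linearly interpolate a mergekit-style parameter gradient across layers.

    Illustrative only: mergekit performs this expansion internally for
    list-valued parameters such as select_topk.
    """
    anchors = np.linspace(0.0, 1.0, len(gradient))
    layers = np.linspace(0.0, 1.0, num_layers)
    return np.interp(layers, anchors, gradient).tolist()

# Approximate per-layer top-k fractions for the RPMax donor above.
rpmax_topk = expand_gradient([0.18, 0.3, 0.32, 0.38, 0.32, 0.3])
print(rpmax_topk[0], rpmax_topk[40], rpmax_topk[-1])  # ~0.18 ... ~0.3
```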
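### How SCE Works (Sketch)

SCE (Select, Calculate, Erase), introduced in the [linked paper](https://arxiv.org/abs/2408.07990), merges task vectors in three steps: it keeps only the highest-variance fraction of each tensor's delta elements across donors, derives per-donor fusion coefficients from the squared magnitude of what remains, and zeros out elements whose sign disagrees with the weighted majority before adding the fused delta to the base. The single-tensor sketch below is a simplified illustration of that idea, not mergekit's actual implementation, which additionally handles the `int8_mask`, `rescale`, and `normalize` options and per-layer gradients shown above:

```python
import torch

def sce_merge(base: torch.Tensor, donors: list[torch.Tensor], topk: float = 0.55) -> torch.Tensor:
    """Simplified single-tensor SCE merge (select -> calculate -> erase)."""
    # Task vectors: each donor's delta from the base weights.
    deltas = torch.stack([d - base for d in donors])  # (n_donors, ...)

    # Select: keep the top-k fraction of elements by variance across
    # donors; low-variance elements carry little distinguishing signal.
    var = deltas.var(dim=0, unbiased=False)
    k = max(1, int(topk * var.numel()))
    thresh = var.flatten().topk(k).values.min()
    deltas = deltas * (var >= thresh).to(deltas.dtype)

    # Calculate: per-donor fusion coefficients from squared magnitudes.
    weights = (deltas ** 2).sum(dim=tuple(range(1, deltas.dim())))
    weights = weights / weights.sum().clamp_min(1e-12)

    # Erase: drop elements whose sign disagrees with the weighted majority.
    weighted = deltas * weights.view(-1, *([1] * (deltas.dim() - 1)))
    majority_sign = weighted.sum(dim=0).sign()
    sign_mask = (deltas.sign() == majority_sign).to(deltas.dtype)

    return base + (weighted * sign_mask).sum(dim=0)
```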
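### Usage

To reproduce the merge, save the configuration above as `config.yaml` and run `mergekit-yaml config.yaml ./output-model` (add `--cuda` if a GPU is available). Once produced, the model loads like any Llama-3.3-70B-based checkpoint; the snippet below is a generic `transformers` example with a placeholder model path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./output-model"  # or the Hugging Face repo id once uploaded

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```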