---
base_model:
- jayavibhav/llama3.2_1b_CoT
- huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged
- ank028/Llama-3.2-1B-Instruct-medmcqa
- autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
- MLking2/llama-3.2-1b-medical
- unsloth/Llama-3.2-1B-Instruct-bnb-4bit
- qzhang-2024/Llama-3.2-1B-pre-trained
- Alelcv27/llama3.2-1b-math-code
- student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09
- autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear
- meta-llama/Llama-3.2-1B-Instruct
- meta-llama/Llama-3.2-1B
- huyhoangt2201/llama-3.2-1b-chat-sql3-merged
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) as the base.

### Models Merged

The following models were included in the merge:

* [jayavibhav/llama3.2_1b_CoT](https://huggingface.co/jayavibhav/llama3.2_1b_CoT)
* [huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged](https://huggingface.co/huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged)
* [ank028/Llama-3.2-1B-Instruct-medmcqa](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-medmcqa)
* [autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1](https://huggingface.co/autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1)
* [MLking2/llama-3.2-1b-medical](https://huggingface.co/MLking2/llama-3.2-1b-medical)
* [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit)
* [qzhang-2024/Llama-3.2-1B-pre-trained](https://huggingface.co/qzhang-2024/Llama-3.2-1B-pre-trained)
* [Alelcv27/llama3.2-1b-math-code](https://huggingface.co/Alelcv27/llama3.2-1b-math-code)
* [student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09](https://huggingface.co/student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09)
* [autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear](https://huggingface.co/autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear)
* [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B)
* [huyhoangt2201/llama-3.2-1b-chat-sql3-merged](https://huggingface.co/huyhoangt2201/llama-3.2-1b-chat-sql3-merged)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: ties
architectures: ["transformer"]
base_model: meta-llama/Llama-3.2-1B-Instruct
models:
- model: Alelcv27/llama3.2-1b-math-code
- model: huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged
- model: autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
- model: meta-llama/Llama-3.2-1B-Instruct
- model: autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear
- model: meta-llama/Llama-3.2-1B
- model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
- model: MLking2/llama-3.2-1b-medical
- model: jayavibhav/llama3.2_1b_CoT
- model: huyhoangt2201/llama-3.2-1b-chat-sql3-merged
- model: student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09
- model: qzhang-2024/Llama-3.2-1B-pre-trained
- model: ank028/Llama-3.2-1B-Instruct-medmcqa
parameters:
  density: 0.5
  weight: 1.0
  int8_mask: true
  normalize: true
```
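To make the `density` and `weight` parameters above concrete, here is a minimal toy sketch of the TIES procedure (trim task vectors, elect a per-parameter sign, then disjoint-merge) on NumPy arrays. This is an illustration only, not mergekit's actual implementation; the function name `ties_merge` and its exact trimming/tie-breaking details are simplifications for this example.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weight=1.0):
    """Toy sketch of TIES merging on flat parameter arrays."""
    # 1. Task vectors: each fine-tune's difference from the base, scaled by `weight`.
    deltas = [weight * (ft - base) for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # 3. Elect a sign per parameter from the summed (trimmed) task vectors.
    sign = np.sign(np.sum(trimmed, axis=0))

    # 4. Disjoint mean: average only the entries that agree with the elected sign.
    agree = [np.where(np.sign(t) == sign, t, 0.0) for t in trimmed]
    counts = np.sum([a != 0 for a in agree], axis=0)
    merged_delta = np.sum(agree, axis=0) / np.maximum(counts, 1)

    return base + merged_delta
```

In practice the merge is of course produced with mergekit's CLI (e.g. `mergekit-yaml config.yaml ./merged`) rather than hand-rolled code; the sketch only shows why `density: 0.5` discards roughly half of each task vector before sign election, and `normalize: true` corresponds to averaging over the agreeing models rather than summing.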