---
base_model:
- chuanli11/Llama-3.2-3B-Instruct-uncensored
- AcademieDuNumerique/Llama-3.2-3B-SQL-Instruct
- Atharva26/llama-3.2-3b-mathdaily-chatbot
- Diluksha/Llama_3.2_3B_sql_finetuned_full
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
- bunnycore/Llama-3.2-3B-CodeReactor
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as the base model.

### Models Merged

The following models were included in the merge:
* [chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored)
* [AcademieDuNumerique/Llama-3.2-3B-SQL-Instruct](https://huggingface.co/AcademieDuNumerique/Llama-3.2-3B-SQL-Instruct)
* [Atharva26/llama-3.2-3b-mathdaily-chatbot](https://huggingface.co/Atharva26/llama-3.2-3b-mathdaily-chatbot)
* [Diluksha/Llama_3.2_3B_sql_finetuned_full](https://huggingface.co/Diluksha/Llama_3.2_3B_sql_finetuned_full)
* [bunnycore/Llama-3.2-3B-CodeReactor](https://huggingface.co/bunnycore/Llama-3.2-3B-CodeReactor)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Atharva26/llama-3.2-3b-mathdaily-chatbot
    parameters:
      density: 0.5
      weight: 0.5
  - model: Diluksha/Llama_3.2_3B_sql_finetuned_full
    parameters:
      density: 0.5
      weight: 0.5
  - model: chuanli11/Llama-3.2-3B-Instruct-uncensored
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-CodeReactor
    parameters:
      density: 0.5
      weight: 0.5
  - model: AcademieDuNumerique/Llama-3.2-3B-SQL-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
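
### Reproducing the Merge

The configuration above can be saved to a file and passed to mergekit, either via the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged`) or through its Python entry points. The sketch below assumes mergekit's documented Python API (`MergeConfiguration`, `run_merge`, `MergeOptions`); the `config.yaml` and `./merged` paths are placeholders.

```python
# Hedged sketch: reproduce the TIES merge from the YAML configuration above.
# Assumes mergekit is installed (pip install mergekit) and exposes the
# MergeConfiguration / run_merge API described in its repository;
# "config.yaml" and "./merged" are placeholder paths.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",  # directory the merged weights are written to
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```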
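
## Usage

The merged model loads like any other Llama 3.2 Instruct checkpoint via transformers. A minimal inference sketch, assuming the merged weights live at a local `./merged` directory (substitute this model's Hub repo id as appropriate):

```python
# Minimal inference sketch with transformers; "./merged" is a placeholder
# for this model's local path or Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

# Llama 3.2 Instruct models expect the chat template.
messages = [{"role": "user", "content": "Write a SQL query that counts rows per day."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```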