---
base_model:
- autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
- ank028/Llama-3.2-1B-Instruct-commonsense_qa
library_name: transformers
tags:
- mergekit
- merge
---
# c_l
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with [ank028/Llama-3.2-1B-Instruct-commonsense_qa](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-commonsense_qa) as the base model.
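SLERP interpolates each pair of weight tensors along the great-circle arc between them rather than along a straight line, so the blend stays closer in scale to its parents; the factor `t` controls the mix (`t=0` keeps the base model's tensor, `t=1` takes the other model's). A minimal NumPy sketch of the idea, illustrative only and not mergekit's actual implementation:

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two
# flattened weight tensors. Illustrative only; not mergekit's implementation.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Angle between the two tensors, measured on the unit sphere
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1

# t=0 recovers the first tensor, t=1 the second, t=0.5 an arc-midpoint blend.
a, b = np.random.randn(16), np.random.randn(16)
mid = slerp(0.5, a, b)
```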
### Models Merged
The following models were included in the merge:
* [autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1](https://huggingface.co/autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1)
* [ank028/Llama-3.2-1B-Instruct-commonsense_qa](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-commonsense_qa)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: ank028/Llama-3.2-1B-Instruct-commonsense_qa
        layer_range: [0, 16]
      - model: autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
        layer_range: [0, 16]
merge_method: slerp
base_model: ank028/Llama-3.2-1B-Instruct-commonsense_qa
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
name: Llama-3.2-1B-Instruct-commonsense_qa-MGSM8K-sft1-slerp
```
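The `t` lists define an interpolation gradient across the 16 layers: self-attention tensors start at the base model (`t=0`) in the early layers and shift toward the MGSM8K model (`t=1`) in the later ones, MLP tensors follow the mirrored schedule, and all remaining tensors are blended evenly at `t=0.5`. To reproduce the merge, the configuration can be run through mergekit. A sketch using mergekit's Python API, assuming the YAML above is saved as `slerp-config.yml` (the file name is illustrative):

```python
# Sketch: run the merge with mergekit's Python API (pip install mergekit).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (file name is an assumption)
with open("slerp-config.yml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged checkpoint, copying the base model's tokenizer alongside it
run_merge(
    config,
    out_path="./Llama-3.2-1B-Instruct-commonsense_qa-MGSM8K-sft1-slerp",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```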
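The merged checkpoint loads like any other Llama-3.2-1B-Instruct model. A minimal usage sketch with transformers, assuming this repository's id is `ank028/c_l` (inferred from the card's title; substitute the actual repo id or a local merge output path):

```python
# Sketch: load and query the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ank028/c_l"  # assumed repo id; a local output path also works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# One math-style prompt (MGSM8K flavor) through the Llama chat template
messages = [{"role": "user", "content": "A shelf holds 12 books and there are 4 shelves. How many books are there?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```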