---
base_model:
- Bllossom/llama-3.2-Korean-Bllossom-3B
- CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct
- EpistemeAI/Llama-3.2-3B-Agent007
- Saxo/Linkbricks-Llama3.2-Korean-cpt-3b
- RyanYr/llama32-3b-it_CoT-it_SFT
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [EpistemeAI/Llama-3.2-3B-Agent007](https://huggingface.co/EpistemeAI/Llama-3.2-3B-Agent007) as the base model.
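
For intuition, the sketch below illustrates the per-layer interpolation at the heart of Model Stock: each fine-tuned model is treated as an offset from the base anchor, the average pairwise cosine between those offsets estimates how tightly they cluster, and the merged weight is pulled from the anchor toward their centroid by a ratio derived from that cosine. This is a minimal illustration assuming the paper's N-model ratio t = N·cosθ / (1 + (N−1)·cosθ); mergekit's actual `model_stock` implementation (including the `filter_wise` option in the configuration below) may differ in detail.

```python
import torch

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative per-layer Model Stock merge (not mergekit's exact code)."""
    n = len(tuned)
    if n < 2:
        raise ValueError("Model Stock needs at least two fine-tuned models")
    # Offsets of each fine-tuned weight from the base anchor.
    deltas = [(w - base).flatten() for w in tuned]
    # Average pairwise cosine similarity between offsets (cos(theta) in the paper).
    pairs = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n)
        for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(pairs).mean().clamp(min=0.0)  # clamp is an illustrative safety guard
    # Interpolation ratio t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1.0 + (n - 1) * cos_theta)
    # Pull the merged weight from the anchor toward the centroid of the fine-tuned weights.
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1.0 - t) * base
```

In this card, the anchor is EpistemeAI/Llama-3.2-3B-Agent007 and the remaining four models play the role of the fine-tuned variants.
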
### Models Merged

The following models were included in the merge:
* [Bllossom/llama-3.2-Korean-Bllossom-3B](https://huggingface.co/Bllossom/llama-3.2-Korean-Bllossom-3B)
* [CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct](https://huggingface.co/CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct)
* [Saxo/Linkbricks-Llama3.2-Korean-cpt-3b](https://huggingface.co/Saxo/Linkbricks-Llama3.2-Korean-cpt-3b)
* [RyanYr/llama32-3b-it_CoT-it_SFT](https://huggingface.co/RyanYr/llama32-3b-it_CoT-it_SFT)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: EpistemeAI/Llama-3.2-3B-Agent007
dtype: float16
parameters:
  filter_wise: false
  weight: 1
  density: 0.42
  gamma: 0.03
models:
  - model: EpistemeAI/Llama-3.2-3B-Agent007
    layer_range: [0, 28]
  - model: Bllossom/llama-3.2-Korean-Bllossom-3B
    layer_range: [0, 28]
  - model: CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct
    layer_range: [0, 28]
  - model: Saxo/Linkbricks-Llama3.2-Korean-cpt-3b
    layer_range: [0, 28]
  - model: RyanYr/llama32-3b-it_CoT-it_SFT
    layer_range: [0, 28]
```
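
The merge can be reproduced from this configuration either with mergekit's `mergekit-yaml` command-line tool or from Python. The snippet below is a hedged sketch using mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); option names and defaults can vary between mergekit versions, so check the version you have installed.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (the file path is illustrative).
with open("model_stock_config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge and write the merged model to ./merged-llama32-3b.
run_merge(
    merge_config,
    out_path="./merged-llama32-3b",
    options=MergeOptions(
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=True,   # reduce peak memory while loading checkpoints
    ),
)
```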
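
The merged output directory loads like any other Llama-3.2-style checkpoint with transformers. The path below is the illustrative output directory from the sketch above, not a published repository id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged-llama32-3b" is the illustrative output path used above, not a published repo id.
model_path = "./merged-llama32-3b"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Simple chat-style generation using the tokenizer's chat template.
messages = [{"role": "user", "content": "Introduce yourself briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```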