---
base_model:
- allenai/Llama-3.1-Tulu-3-8B-SFT
- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
- NousResearch/DeepHermes-3-Llama-3-8B-Preview
- allenai/Llama-3.1-Tulu-3-8B
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- allenai/Llama-3.1-Tulu-3-8B-DPO
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# Apollo-exp-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) as the base model.
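For intuition, Model Stock merges each weight tensor by averaging the fine-tuned checkpoints and then interpolating that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tuned models' task vectors. The snippet below is a rough per-tensor illustration of that idea, not mergekit's actual implementation; the helper name and the flattened cosine-similarity estimate are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative per-tensor Model Stock-style merge (hypothetical helper)."""
    k = len(finetuned)
    # Task vectors: how each fine-tuned checkpoint moved away from the base.
    deltas = [(w - base).flatten() for w in finetuned]
    # Mean pairwise cosine similarity between task vectors approximates cos(theta).
    sims = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(sims).mean()
    # Interpolation ratio from the Model Stock paper: t = k*cos / ((k-1)*cos + 1).
    t = k * cos_theta / ((k - 1) * cos_theta + 1)
    w_avg = torch.stack(finetuned).mean(dim=0)
    # Pull the average of the fine-tuned weights back toward the base weight.
    return t * w_avg + (1 - t) * base
```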
### Models Merged
The following models were included in the merge:
* [allenai/Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT)
* [huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated)
* [NousResearch/DeepHermes-3-Llama-3-8B-Preview](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview)
* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
* [allenai/Llama-3.1-Tulu-3-8B-DPO](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
  - model: allenai/Llama-3.1-Tulu-3-8B
  - model: allenai/Llama-3.1-Tulu-3-8B-SFT
  - model: allenai/Llama-3.1-Tulu-3-8B-DPO
  - model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
merge_method: model_stock
base_model: allenai/Llama-3.1-Tulu-3-8B
normalize: true
int8_mask: true
dtype: bfloat16
```
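To reproduce the merge, the YAML above can be saved to a file (e.g. `config.yaml`, a placeholder name) and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`.

Below is a minimal inference sketch with Transformers, assuming the merged weights are published on the Hub as `rootxhacker/Apollo-exp-8B`; swap in your local merge output directory if you ran the merge yourself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rootxhacker/Apollo-exp-8B"  # assumed repo id; use a local path for your own merge
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain model merging in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```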