---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- bardsai/jaskier-7b-dpo-v5.6
- AbacusResearch/haLLAwa3
- cognitivecomputations/WestLake-7B-v2-laser
---
# jaLLAbi2-7b
jaLLAbi2-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [AbacusResearch/haLLAwa3](https://huggingface.co/AbacusResearch/haLLAwa3)
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
## 🧩 Configuration
```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b
    # No parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2
    # Emphasize the beginning of Vicuna format models
    parameters:
      weight: 0.2
      density: 0.59
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters:
      weight: 0.2
      density: 0.55
  # Vicuna format
  - model: AbacusResearch/haLLAwa3
    parameters:
      weight: 0.3
      density: 0.55
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      weight: 0.3
      density: 0.55
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
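
With the `dare_ties` merge method, each model's delta from the base is randomly pruned according to its `density`, the surviving deltas are rescaled, and the results are combined with the listed `weight`s on top of `eren23/ogno-monarch-jaskier-merge-7b`. The merge can be reproduced by saving the YAML above to a file and running mergekit's CLI, e.g. `mergekit-yaml config.yaml ./jaLLAbi2-7b`.

## 💻 Usage

A minimal inference sketch with 🤗 Transformers follows. The repo id `solankibhargav/jaLLAbi2-7b` and the chat-template call are assumptions (the card does not specify a prompt format); adjust them to match the actual repository and tokenizer.

```python
# pip install -U transformers accelerate torch

import torch
import transformers
from transformers import AutoTokenizer

model_id = "solankibhargav/jaLLAbi2-7b"  # assumed repo id; adjust if different

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumes the tokenizer ships a chat template (typical for Mistral-based merges).
messages = [{"role": "user", "content": "What is a model merge?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```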