---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- bardsai/jaskier-7b-dpo-v5.6
- AbacusResearch/haLLAwa3
- cognitivecomputations/WestLake-7B-v2-laser
---

# jaLLAbi2-7b
jaLLAbi2-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
- [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
- [AbacusResearch/haLLAwa3](https://huggingface.co/AbacusResearch/haLLAwa3)
- [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
## 🧩 Configuration
```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b
    # No parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2 # Emphasize the beginning of Vicuna format models
    parameters:
      weight: 0.2
      density: 0.59
  - model: bardsai/jaskier-7b-dpo-v5.6 # Vicuna format
    parameters:
      weight: 0.2
      density: 0.55
  - model: AbacusResearch/haLLAwa3
    parameters:
      weight: 0.3
      density: 0.55
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      weight: 0.3
      density: 0.55
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
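With `dare_ties`, each model's `density` sets the fraction of its delta parameters (differences from the base model) retained after random dropping, and `weight` scales its contribution when the retained deltas are merged onto the base. A config like the one above is typically executed with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./merged-model`).

## 💻 Usage

A minimal inference sketch, assuming the merged model is published as a standard causal LM on the Hugging Face Hub; the repo id below is a placeholder, not confirmed by this card. It uses the standard `transformers` API:

```python
# Minimal usage sketch. "your-username/jaLLAbi2-7b" is a hypothetical
# placeholder repo id; replace it with the model's actual Hub repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/jaLLAbi2-7b"  # assumption: substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native dtype (bfloat16 here)
    device_map="auto",   # place layers across available GPU(s)/CPU automatically
)

prompt = "Explain what a model merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```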