This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method using FuseAI/FuseChat-Llama-3.1-8B-SFT as a base.
The following models were included in the merge:

* refuelai/Llama-3-Refueled
* johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.9
* Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
* DreadPoor/Derivative-8B-Model_Stock
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: refuelai/Llama-3-Refueled
  - model: johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.9
  - model: Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
  - model: DreadPoor/Derivative-8B-Model_Stock
merge_method: sce
base_model: FuseAI/FuseChat-Llama-3.1-8B-SFT
parameters:
  select_topk: 0.3
dtype: bfloat16
```
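For reference, a configuration like the one above can be run through mergekit directly. The sketch below uses mergekit's Python entry point as described in its README; the exact `run_merge` signature and `MergeOptions` fields are assumptions that may differ between mergekit versions, and `config.yaml` / `./merged-model` are placeholder paths.

```python
# Minimal sketch: reproduce a merge from the YAML config above with mergekit.
# Assumes mergekit is installed (pip install mergekit). The run_merge /
# MergeOptions API follows the mergekit README and may vary across versions.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration (saved locally as config.yaml -- placeholder path).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the SCE merge and write the merged weights to ./merged-model (placeholder).
run_merge(
    merge_config,
    "./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=True,              # reduce peak memory while loading checkpoints
    ),
)
```

The same configuration can also be passed to mergekit's command-line interface instead of the Python API.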
Detailed results can be found here! Summarized results can be found here!
| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 26.62     |
| IFEval (0-Shot)     | 65.15     |
| BBH (3-Shot)        | 37.76     |
| MATH Lvl 5 (4-Shot) | 7.40      |
| GPQA (0-shot)       | 5.03      |
| MuSR (0-shot)       | 17.33     |
| MMLU-PRO (5-shot)   | 27.06     |
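Since the merge produces a standard Llama-3.1-8B-class causal LM in bfloat16, it should load with Hugging Face transformers like any other checkpoint. The sketch below is illustrative only: `your-namespace/here-be-dragons` is a placeholder repository id, and the use of the Llama-3.1 instruct chat template is an assumption based on the instruct-style models in the merge.

```python
# Sketch: load and query the merged model with Hugging Face transformers.
# "your-namespace/here-be-dragons" is a placeholder repo id; substitute the
# actual repository name or a local path to the merge output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/here-be-dragons"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Assumes the merged model follows the Llama-3.1 instruct chat template.
messages = [{"role": "user", "content": "Summarize the SCE merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```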