# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
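SLERP (spherical linear interpolation) blends two weight tensors along the arc of the hypersphere between them rather than along a straight line, so the interpolated weights keep a sensible norm even when the two models point in different directions. As a rough illustration of the idea (not mergekit's actual implementation, which also handles parameter filters, degenerate angles, and per-layer schedules), here is a minimal PyTorch sketch:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation: t=0 returns `a`, t=1 returns `b`."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two tensors, treated as vectors on a hypersphere.
    cos_omega = torch.clamp(
        torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps),
        -1.0,
        1.0,
    )
    omega = torch.acos(cos_omega)
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    sin_omega = torch.sin(omega)
    return torch.sin((1.0 - t) * omega) / sin_omega * a + torch.sin(t * omega) / sin_omega * b
```

mergekit applies this tensor by tensor, with the interpolation factor `t` varying across layers as configured below.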
### Models Merged
The following models were included in the merge:
* [Saxo/Linkbricks-Horizon-AI-Korean-Superb-27B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Superb-27B)
* [nbeerbower/gemma2-gutenberg-27B](https://huggingface.co/nbeerbower/gemma2-gutenberg-27B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Saxo/Linkbricks-Horizon-AI-Korean-Superb-27B
  - model: nbeerbower/gemma2-gutenberg-27B
merge_method: slerp
base_model: Saxo/Linkbricks-Horizon-AI-Korean-Superb-27B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: the Korean-Superb base model dominates the input & output layers, gemma2-gutenberg the middle layers
```
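The five-element `t` list is not one value per layer: mergekit stretches the anchor points across the full depth of the network, so each transformer layer gets its own interpolation factor, with `t = 0` taking the layer from the base model and `t = 1` from gemma2-gutenberg. A rough numpy sketch of that expansion (the exact logic lives inside mergekit and may differ in detail; 46 is Gemma 2 27B's layer count):

```python
import numpy as np

# Anchor points from the config above; Gemma 2 27B has 46 transformer layers.
t_anchors = [0, 0.5, 1, 0.5, 0]
num_layers = 46

# Spread the anchors evenly over [0, 1] and interpolate a t value per layer.
anchor_pos = np.linspace(0.0, 1.0, num=len(t_anchors))
layer_pos = np.linspace(0.0, 1.0, num=num_layers)
t_per_layer = np.interp(layer_pos, anchor_pos, t_anchors)

for i, t in enumerate(t_per_layer):
    print(f"layer {i:2d}: t = {t:.3f}")  # 0.0 at the ends, 1.0 in the middle
```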
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 36.65 |
| IFEval (0-shot, strict accuracy)  | 74.97 |
| BBH (3-shot, normalized accuracy) | 50.77 |
| MATH Lvl 5 (4-shot, exact match)  | 22.89 |
| GPQA (0-shot, acc_norm)           | 15.55 |
| MuSR (0-shot, acc_norm)           | 15.18 |
| MMLU-PRO (5-shot, accuracy)       | 40.55 |
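To try the merged model locally, the standard transformers loading path for Gemma 2 should work; below is a minimal sketch (the repo id comes from this card, `bfloat16` matches the merge dtype, and `device_map="auto"` assumes accelerate is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Gemma2Slerp4-27B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # shards the 27B weights across available GPUs
)

prompt = "한국어로 자기소개를 해 주세요."  # the base model is Korean-focused
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```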