# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with zelk12/MT-Merge1-MAMU-gemma-2-9B as the base model.
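SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the great-circle arc between them rather than along a straight line, which preserves the overall magnitude of the weights better than plain averaging. The sketch below is illustrative only, not mergekit's exact implementation; the `slerp` helper and its epsilon handling are assumptions:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative sketch)."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two tensors, computed on normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    omega = torch.acos(dot)
    if omega < eps:  # nearly parallel: plain linear interpolation is stable
        return (1 - t) * a + t * b
    # Great-circle weights: at t=0 this returns a, at t=1 it returns b.
    w_a = torch.sin((1 - t) * omega) / torch.sin(omega)
    w_b = torch.sin(t * omega) / torch.sin(omega)
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape).to(a.dtype)
```

With the `t: 0.666666667` used in the configuration below, every merged tensor sits two-thirds of the way along the arc from the base model toward the second model.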
### Models Merged

The following models were included in the merge:

- zelk12/MT-Merge1-MAMU-gemma-2-9B (base)
- zelk12/MT5-Gen1-MMGBI-gemma-2-9B
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: zelk12/MT-Merge1-MAMU-gemma-2-9B
  - model: zelk12/MT5-Gen1-MMGBI-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Merge1-MAMU-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.666666667
```
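To reproduce the merge, the configuration above can be saved as `config.yaml` and run through mergekit, either via its CLI (`mergekit-yaml config.yaml ./output`) or its Python API. A minimal sketch of the latter follows; the output path is illustrative, and the exact `MergeOptions` fields may vary between mergekit versions:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as config.yaml).
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged bfloat16 checkpoint to a local directory.
run_merge(
    merge_config,
    "./MT-Merge1-gemma-2-9B",  # illustrative output path
    options=MergeOptions(copy_tokenizer=True),
)
```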
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 33.13 |
| IFEval (0-Shot) | 78.86 |
| BBH (3-Shot) | 44.06 |
| MATH Lvl 5 (4-Shot) | 12.69 |
| GPQA (0-shot) | 13.53 |
| MuSR (0-shot) | 12.15 |
| MMLU-PRO (5-shot) | 37.49 |
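The merged model can be loaded like any other Gemma 2 checkpoint. A standard transformers sketch, assuming the merge keeps the Gemma 2 chat template; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zelk12/MT-Merge1-gemma-2-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # same dtype the merge was produced in
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what a SLERP model merge does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```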