Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                 | Value |
|------------------------|------:|
| Avg.                   | 25.35 |
| ARC (25-shot)          | 22.27 |
| HellaSwag (10-shot)    | 28.99 |
| MMLU (5-shot)          | 26.62 |
| TruthfulQA (0-shot)    | 41.71 |
| Winogrande (5-shot)    | 52.72 |
| GSM8K (5-shot)         |  0.23 |
| DROP (3-shot)          |  4.93 |
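As a quick sanity check, the reported "Avg." matches the unweighted mean of the seven benchmark scores above (a minimal sketch, assuming simple averaging is how the leaderboard computes it):

```python
# Scores copied from the leaderboard table above.
scores = {
    "ARC (25-shot)": 22.27,
    "HellaSwag (10-shot)": 28.99,
    "MMLU (5-shot)": 26.62,
    "TruthfulQA (0-shot)": 41.71,
    "Winogrande (5-shot)": 52.72,
    "GSM8K (5-shot)": 0.23,
    "DROP (3-shot)": 4.93,
}

# Unweighted mean across all seven benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 25.35
```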
Downloads last month: 1,684

Safetensors model size: 315M params

Tensor types: BF16 · U8
Inference Providers: this model isn't deployed by any inference provider.

Spaces using Corianas/256_5epoch: 26