---
license: apache-2.0
---

# pythia-1.4b-deduped model finetuned on ShareGPT data

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 30.79 |
| ARC (25-shot) | 34.3 |
| HellaSwag (10-shot) | 54.49 |
| MMLU (5-shot) | 24.0 |
| TruthfulQA (0-shot) | 41.81 |
| Winogrande (5-shot) | 55.25 |
| GSM8K (5-shot) | 0.83 |
| DROP (3-shot) | 4.88 |
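The Avg. row is the unweighted mean of the seven per-benchmark scores, rounded to two decimals. A minimal sketch to check that arithmetic (the dictionary below simply restates the table values):

```python
# Per-benchmark scores from the leaderboard table above
scores = {
    "ARC (25-shot)": 34.3,
    "HellaSwag (10-shot)": 54.49,
    "MMLU (5-shot)": 24.0,
    "TruthfulQA (0-shot)": 41.81,
    "Winogrande (5-shot)": 55.25,
    "GSM8K (5-shot)": 0.83,
    "DROP (3-shot)": 4.88,
}

# Unweighted mean across the seven benchmarks, rounded to two decimals
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 30.79, matching the Avg. row
```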