I have no idea what I'm doing

Anyway, I fine-tuned the Llama 2 7B base (HF format) model on the Guanaco Unfiltered dataset

It's probably horrible

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|-------|
| Avg. | 44.06 |
| ARC (25-shot) | 52.22 |
| HellaSwag (10-shot) | 79.08 |
| MMLU (5-shot) | 46.63 |
| TruthfulQA (0-shot) | 42.97 |
| Winogrande (5-shot) | 74.51 |
| GSM8K (5-shot) | 7.28 |
| DROP (3-shot) | 5.75 |
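Assuming the Avg. row is just the unweighted mean of the seven benchmark scores (which is how the Open LLM Leaderboard computes it), a quick sanity check:

```python
# Scores copied from the table above
scores = {
    "ARC (25-shot)": 52.22,
    "HellaSwag (10-shot)": 79.08,
    "MMLU (5-shot)": 46.63,
    "TruthfulQA (0-shot)": 42.97,
    "Winogrande (5-shot)": 74.51,
    "GSM8K (5-shot)": 7.28,
    "DROP (3-shot)": 5.75,
}

# Unweighted mean across all seven benchmarks
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 44.06
```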