Adding Evaluation Results #2
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -30,4 +30,17 @@ I would like to thank AutoMeta for providing me with the computing power necessa
 ### Prompt Template
 ```
 ### Human: {prompt} ### Assistant:
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__Guanaco-3B-Uncensored)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 34.18                     |
+| ARC (25-shot)         | 42.49                     |
+| HellaSwag (10-shot)   | 66.99                     |
+| MMLU (5-shot)         | 25.55                     |
+| TruthfulQA (0-shot)   | 34.71                     |
+| Winogrande (5-shot)   | 63.38                     |
+| GSM8K (5-shot)        | 0.53                      |
+| DROP (3-shot)         | 5.62                      |
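For anyone trying the prompt template above, here is a minimal sketch of loading the model and formatting a prompt with the Transformers library. The model id is taken from the detailed-results link in this PR; the example question and generation settings are illustrative assumptions, not part of the PR.

```python
# Minimal sketch: apply the "### Human: ... ### Assistant:" template
# and generate a reply. The model id comes from the results link above;
# the question and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fredithefish/Guanaco-3B-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "### Human: {prompt} ### Assistant:".format(prompt="What is a leaderboard?")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```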
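As a sanity check on the table, the Avg. row is the plain arithmetic mean of the seven benchmark scores:

```python
# Verify that Avg. is the unweighted mean of the seven benchmark scores.
scores = [42.49, 66.99, 25.55, 34.71, 63.38, 0.53, 5.62]
print(f"{sum(scores) / len(scores):.2f}")  # -> 34.18
```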