Update README.md
README.md CHANGED
@@ -18,11 +18,11 @@ pipeline_tag: text-generation
 
 | Metric                | gpt2_platypus-dolly-guanaco | GPT-2 (base) |
 |-----------------------|-------|-------|
-| Avg.                  | 30.18 | 29.9  |
-| ARC (25-shot)         | 23.21 | 21.84 |
-| HellaSwag (10-shot)   | 31.04 | 31.6  |
-| MMLU (5-shot)         | 26.16 | 25.86 |
-| TruthfulQA (0-shot)   | 40.31 | 40.67 |
+| Avg.                  | **30.18** | 29.9      |
+| ARC (25-shot)         | **23.21** | 21.84     |
+| HellaSwag (10-shot)   | 31.04     | **31.6**  |
+| MMLU (5-shot)         | **26.16** | 25.86     |
+| TruthfulQA (0-shot)   | 40.31     | **40.67** |
 
 
 We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
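The detailed reproduction steps live further down in the README. As a rough, non-authoritative sketch, a comparable run with a recent lm-evaluation-harness CLI might look like the following; note that `<org>` is a placeholder for the model's actual Hub namespace, and that the leaderboard pinned an older harness commit whose entry point (`python main.py`) and task identifiers (e.g. `hendrycksTest-*` for MMLU) differ from the current release:

```bash
# Sketch only: assumes a recent lm-evaluation-harness release (pip package "lm-eval").
# The leaderboard-pinned commit uses a different CLI, so flags/task names may vary.
pip install lm-eval

# ARC (25-shot); <org> is a placeholder for the actual Hub namespace.
# Repeat with --tasks hellaswag --num_fewshot 10, --tasks mmlu --num_fewshot 5,
# and --tasks truthfulqa_mc2 --num_fewshot 0 to cover the other rows of the table.
lm_eval --model hf \
  --model_args pretrained=<org>/gpt2_platypus-dolly-guanaco \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size 8
```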