Adding Evaluation Results #4
opened by leaderboard-pr-bot

README.md CHANGED
@@ -27,3 +27,17 @@ PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml
 CURRENT MMLU: 50.36
 
 Issue: Compared to the original Qwen-Chat scoring 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vonjack__Qwen-LLaMAfied-HFTok-7B-Chat)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 48.51                     |
+| ARC (25-shot)         | 50.51                     |
+| HellaSwag (10-shot)   | 83.65                     |
+| MMLU (5-shot)         | 51.53                     |
+| TruthfulQA (0-shot)   | 44.23                     |
+| Winogrande (5-shot)   | 71.43                     |
+| GSM8K (5-shot)        | 2.5                       |
+| DROP (3-shot)         | 35.7                      |
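Sanity check on the numbers: the Avg. row is consistent with an unweighted mean of the seven benchmark scores, and the -3.54 delta quoted in the README context matches 53.9 - 50.36. A minimal Python sketch, assuming the leaderboard averages these seven benchmarks equally and rounds to two decimals (an assumption about its convention, not stated in this PR):

```python
# Benchmark scores copied from the table above.
scores = {
    "ARC (25-shot)": 50.51,
    "HellaSwag (10-shot)": 83.65,
    "MMLU (5-shot)": 51.53,
    "TruthfulQA (0-shot)": 44.23,
    "Winogrande (5-shot)": 71.43,
    "GSM8K (5-shot)": 2.5,
    "DROP (3-shot)": 35.7,
}

# Assumed convention: Avg. is the unweighted mean, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 48.51, matching the Avg. row

# MMLU delta from the README context lines: original Qwen-Chat minus this model.
print(round(53.9 - 50.36, 2))  # 3.54
```

Per-task breakdowns behind these aggregates are in the linked `details_vonjack__Qwen-LLaMAfied-HFTok-7B-Chat` dataset.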