leaderboard-pr-bot committed
Commit 4ba00d0
1 Parent(s): d09f1f8

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
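Once merged, the metadata this PR adds can be read back programmatically. Below is a minimal sketch (not part of the PR itself) that downloads the model card with `huggingface_hub` and prints the per-benchmark scores from the `model-index` block; it assumes PyYAML is installed and that the card begins with YAML front matter.

```python
import yaml
from huggingface_hub import hf_hub_download

# Fetch the model card (README.md) for the repo this PR targets.
path = hf_hub_download(
    repo_id="NousResearch/Yarn-Mistral-7b-128k",
    filename="README.md",
)

with open(path, encoding="utf-8") as f:
    text = f.read()

# The leaderboard results live in the YAML front matter between the
# first two '---' markers (assumes the card starts with front matter).
front_matter = yaml.safe_load(text.split("---")[1])

for entry in front_matter.get("model-index", []):
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]
        for metric in result["metrics"]:
            print(f"{dataset}: {metric['type']} = {metric['value']}")
```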

Files changed (1)
  1. README.md +121 -5
README.md CHANGED
@@ -1,12 +1,115 @@
 ---
+language:
+- en
+license: apache-2.0
+library_name: transformers
 datasets:
 - emozilla/yarn-train-tokenized-16k-mistral
 metrics:
 - perplexity
-library_name: transformers
-license: apache-2.0
-language:
-- en
+model-index:
+- name: Yarn-Mistral-7b-128k
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 59.64
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NousResearch/Yarn-Mistral-7b-128k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 82.5
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NousResearch/Yarn-Mistral-7b-128k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 63.02
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NousResearch/Yarn-Mistral-7b-128k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 41.78
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NousResearch/Yarn-Mistral-7b-128k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 76.95
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NousResearch/Yarn-Mistral-7b-128k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 32.6
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NousResearch/Yarn-Mistral-7b-128k
+      name: Open LLM Leaderboard
 ---
 
 # Model Card: Nous-Yarn-Mistral-7b-128k
@@ -59,4 +162,17 @@ Short context benchmarks showing that quality degradation is minimal:
 - [honglu2875](https://github.com/honglu2875): Paper and evals
 
 The authors would like to thank LAION AI for their support of compute for this model.
-It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
+It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |59.42|
+|AI2 Reasoning Challenge (25-Shot)|59.64|
+|HellaSwag (10-Shot) |82.50|
+|MMLU (5-Shot) |63.02|
+|TruthfulQA (0-shot) |41.78|
+|Winogrande (5-shot) |76.95|
+|GSM8k (5-shot) |32.60|
+
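As a quick sanity check on the table above (not something the leaderboard bot emits), the `Avg.` row can be reproduced as the unweighted mean of the six per-benchmark scores added in this PR:

```python
# Scores copied from the results table added by this PR.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 59.64,
    "HellaSwag (10-Shot)": 82.50,
    "MMLU (5-Shot)": 63.02,
    "TruthfulQA (0-shot)": 41.78,
    "Winogrande (5-shot)": 76.95,
    "GSM8k (5-shot)": 32.60,
}

# The leaderboard average is the unweighted mean of the six scores.
average = sum(scores.values()) / len(scores)
print(f"{average:.3f}")  # 59.415, which the leaderboard reports as 59.42
```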