Adding the Open Portuguese LLM Leaderboard Evaluation Results

#6
Files changed (1): README.md (+170 −4)
```diff
@@ -3,10 +3,157 @@ license: other
 license_name: yi-license
 license_link: LICENSE
 widget:
-- text: "你好! 你叫什么名字!"
-  output:
-    text: "你好,我的名字叫聚言,很高兴见到你。"
+- text: 你好! 你叫什么名字!
+  output:
+    text: 你好,我的名字叫聚言,很高兴见到你。
 pipeline_tag: text-generation
+model-index:
+- name: OrionStar-Yi-34B-Chat
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: ENEM Challenge (No Images)
+      type: eduagarcia/enem_challenge
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 70.4
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BLUEX (No Images)
+      type: eduagarcia-temp/BLUEX_without_images
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 62.03
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: OAB Exams
+      type: eduagarcia/oab_exams
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 50.66
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 RTE
+      type: assin2
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 91.71
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 STS
+      type: eduagarcia/portuguese_benchmark
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: pearson
+      value: 79.69
+      name: pearson
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: FaQuAD NLI
+      type: ruanchaves/faquad-nli
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 78.4
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HateBR Binary
+      type: ruanchaves/hatebr
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 84.44
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: PT Hate Speech Binary
+      type: hate_speech_portuguese
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 65.91
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: tweetSentBR
+      type: eduagarcia/tweetsentbr_fewshot
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 72.62
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=OrionStarAI/OrionStar-Yi-34B-Chat
+      name: Open Portuguese LLM Leaderboard
 ---
 
 <!-- markdownlint-disable first-line-h1 -->
@@ -188,4 +335,23 @@ OrionStar-Yi-34B-Chat 开源模型而导致的任何问题,包括但不限于
 
 <div align="center">
 <img src="./pics/wechat_group.jpg" alt="wechat" width="40%" />
-</div>
+</div>
+
+
+# Open Portuguese LLM Leaderboard Evaluation Results
+
+Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/OrionStarAI/OrionStar-Yi-34B-Chat) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+| Metric                   | Value   |
+|--------------------------|---------|
+|Average                   |**72.87**|
+|ENEM Challenge (No Images)| 70.40   |
+|BLUEX (No Images)         | 62.03   |
+|OAB Exams                 | 50.66   |
+|Assin2 RTE                | 91.71   |
+|Assin2 STS                | 79.69   |
+|FaQuAD NLI                | 78.40   |
+|HateBR Binary             | 84.44   |
+|PT Hate Speech Binary     | 65.91   |
+|tweetSentBR               | 72.62   |
+
```
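As a quick sanity check on the numbers in this PR (not part of the diff itself): the table's Average row should be the arithmetic mean of the nine per-benchmark scores declared in the model-index. A minimal sketch in plain Python, with the benchmark names and values copied from the diff above:

```python
# Benchmark scores as reported in the model-index metadata above.
scores = {
    "ENEM Challenge (No Images)": 70.40,
    "BLUEX (No Images)": 62.03,
    "OAB Exams": 50.66,
    "Assin2 RTE": 91.71,
    "Assin2 STS": 79.69,
    "FaQuAD NLI": 78.40,
    "HateBR Binary": 84.44,
    "PT Hate Speech Binary": 65.91,
    "tweetSentBR": 72.62,
}

# Unweighted mean across the nine tasks, rounded to two decimals
# as the leaderboard table displays it.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 72.87, matching the Average row in the results table
```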