Adding the Open Portuguese LLM Leaderboard Evaluation Results

#7
Files changed (1)
1. README.md (+167, -1)
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
 license: apache-2.0
-base_model: mistralai/Mistral-Nemo-Base-2407
 tags:
 - generated_from_trainer
 - axolotl
+base_model: mistralai/Mistral-Nemo-Base-2407
 datasets:
 - cognitivecomputations/Dolphin-2.9
 - teknium/OpenHermes-2.5
@@ -13,6 +13,153 @@ datasets:
 - microsoft/orca-math-word-problems-200k
 - Locutusque/function-calling-chatml
 - internlm/Agent-FLAN
+model-index:
+- name: dolphin-2.9.3-mistral-nemo-12b
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: ENEM Challenge (No Images)
+      type: eduagarcia/enem_challenge
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 72.08
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BLUEX (No Images)
+      type: eduagarcia-temp/BLUEX_without_images
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 62.45
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: OAB Exams
+      type: eduagarcia/oab_exams
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 52.71
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 RTE
+      type: assin2
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 93.08
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 STS
+      type: eduagarcia/portuguese_benchmark
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: pearson
+      value: 80.83
+      name: pearson
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: FaQuAD NLI
+      type: ruanchaves/faquad-nli
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 81.28
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HateBR Binary
+      type: ruanchaves/hatebr
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 85.85
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: PT Hate Speech Binary
+      type: hate_speech_portuguese
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 73.07
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: tweetSentBR
+      type: eduagarcia/tweetsentbr_fewshot
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 72.36
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
+      name: Open Portuguese LLM Leaderboard
 ---
 
 # Dolphin 2.9.3 Mistral Nemo 12b 🐬
@@ -483,3 +630,22 @@ The following hyperparameters were used during training:
 - Pytorch 2.2.2+cu121
 - Datasets 2.19.1
 - Tokenizers 0.19.1
+
+
+# Open Portuguese LLM Leaderboard Evaluation Results
+
+Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+| Metric | Value |
+|--------------------------|---------|
+|Average |**74.86**|
+|ENEM Challenge (No Images)| 72.08|
+|BLUEX (No Images) | 62.45|
+|OAB Exams | 52.71|
+|Assin2 RTE | 93.08|
+|Assin2 STS | 80.83|
+|FaQuAD NLI | 81.28|
+|HateBR Binary | 85.85|
+|PT Hate Speech Binary | 73.07|
+|tweetSentBR | 72.36|
+
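The **Average** row in the added table can be sanity-checked against the nine per-task scores; a minimal sketch (scores copied verbatim from the diff, assuming the leaderboard's headline number is the unweighted mean rounded to two decimals):

```python
# Per-task scores from the Open Portuguese LLM Leaderboard results above.
scores = {
    "ENEM Challenge (No Images)": 72.08,
    "BLUEX (No Images)": 62.45,
    "OAB Exams": 52.71,
    "Assin2 RTE": 93.08,
    "Assin2 STS": 80.83,
    "FaQuAD NLI": 81.28,
    "HateBR Binary": 85.85,
    "PT Hate Speech Binary": 73.07,
    "tweetSentBR": 72.36,
}

# Unweighted mean across the nine tasks, rounded to two decimals.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 74.86, matching the table's Average row
```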