pedrogengo leaderboard-pr-bot committed
Commit d632a1c
1 Parent(s): fc2a2de

Adding Evaluation Results (#8)


- Adding Evaluation Results (cdea28de8eab555e22a96bc8fef329d6b2f2c8f4)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
  1. README.md +118 -2
README.md CHANGED
@@ -1,8 +1,111 @@
  ---
- license: apache-2.0
  language:
  - pt
  - en
+ license: apache-2.0
+ model-index:
+ - name: open-cabrita3b
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 33.79
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=22h/open-cabrita3b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 55.35
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=22h/open-cabrita3b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 25.16
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=22h/open-cabrita3b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 38.5
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=22h/open-cabrita3b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 59.43
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=22h/open-cabrita3b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 0.99
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=22h/open-cabrita3b
+       name: Open LLM Leaderboard
  ---
  The Cabrita model is a collection of continued pre-trained and tokenizer-adapted models for the Portuguese language.
  This artifact is the 3 billion size variant.
@@ -19,4 +122,17 @@ open_llama_3b option.
    archivePrefix={arXiv},
    primaryClass={cs.CL}
  }
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_22h__open-cabrita3b)
+
+ | Metric                          |Value|
+ |---------------------------------|----:|
+ |Avg.                             |35.54|
+ |AI2 Reasoning Challenge (25-Shot)|33.79|
+ |HellaSwag (10-Shot)              |55.35|
+ |MMLU (5-Shot)                    |25.16|
+ |TruthfulQA (0-shot)              |38.50|
+ |Winogrande (5-shot)              |59.43|
+ |GSM8k (5-shot)                   | 0.99|
+
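As a sanity check on the diff above: the Avg. row in the added table is just the unweighted arithmetic mean of the six benchmark scores. A minimal sketch in plain Python, using the values copied from the table (no leaderboard access needed):

```python
# Benchmark scores from the table added in this PR (22h/open-cabrita3b).
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 33.79,
    "HellaSwag (10-Shot)": 55.35,
    "MMLU (5-Shot)": 25.16,
    "TruthfulQA (0-shot)": 38.50,
    "Winogrande (5-shot)": 59.43,
    "GSM8k (5-shot)": 0.99,
}

# The leaderboard's Avg. column is the plain mean of the six scores.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # -> 35.54, matching the Avg. row above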
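If you want these results programmatically rather than from the rendered table, the model-index block this PR adds is machine-readable YAML front matter. A minimal sketch using huggingface_hub's ModelCard parser; the repo id 22h/open-cabrita3b is taken from the leaderboard URLs in the diff, and this assumes a recent huggingface_hub that exposes eval_results on the parsed card data:

```python
from huggingface_hub import ModelCard

# Fetch and parse the repo's README.md; the YAML front matter added by
# this PR is exposed as structured data on card.data.
card = ModelCard.load("22h/open-cabrita3b")

# huggingface_hub parses the model-index block into EvalResult objects.
for result in card.data.eval_results or []:
    # e.g. "AI2 Reasoning Challenge (25-Shot) acc_norm 33.79"
    print(result.dataset_name, result.metric_type, result.metric_value)
```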