Adding Evaluation Results

#2
Files changed (1)
  1. README.md +122 -6
README.md CHANGED
@@ -1,15 +1,118 @@
 ---
+language:
+- en
+license: apache-2.0
+tags:
+- not-for-all-audiences
 datasets:
 - Intel/orca_dpo_pairs
 - athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
 - Open-Orca/SlimOrca
 - MinervaAI/Aesir-Preview
 - allenai/ultrafeedback_binarized_cleaned
-license: apache-2.0
-language:
-- en
-tags:
-- not-for-all-audiences
+model-index:
+- name: NEBULA-23B-v1.0
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 66.72
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-23B-v1.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 86.98
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-23B-v1.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 65.4
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-23B-v1.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 57.6
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-23B-v1.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 82.95
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-23B-v1.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.0
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-23B-v1.0
+      name: Open LLM Leaderboard
 ---
 ### TeeZee/NEBULA-23B-v1.0 ###
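The `model-index` block added above follows the Hub's machine-readable card-metadata schema, which is what the leaderboard widget on the model page reads. As a minimal sketch of pulling those scores back out programmatically (assuming the `huggingface_hub` client library is installed and this PR has been merged into `TeeZee/NEBULA-23B-v1.0`):

```python
# Minimal sketch: read the evaluation results back out of the card metadata.
# Assumes `pip install huggingface_hub` and that the model-index block above
# is present in the repo's README.md (i.e., this PR has been merged).
from huggingface_hub import ModelCard

card = ModelCard.load("TeeZee/NEBULA-23B-v1.0")
metadata = card.data.to_dict()

for entry in metadata.get("model-index", []):
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]
        for metric in result.get("metrics", []):
            print(f"{dataset}: {metric['type']} = {metric['value']}")
```

The second hunk appends a human-readable summary of the same scores to the end of the card: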
 
@@ -23,4 +126,17 @@ tags:
 - for evaluation of RP and ERP more tests are needed
 
 ### Prompt template
-- Alpaca
+- Alpaca
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__NEBULA-23B-v1.0)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |59.94|
+|AI2 Reasoning Challenge (25-Shot)|66.72|
+|HellaSwag (10-Shot)              |86.98|
+|MMLU (5-Shot)                    |65.40|
+|TruthfulQA (0-shot)              |57.60|
+|Winogrande (5-shot)              |82.95|
+|GSM8k (5-shot)                   | 0.00|
+
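The `Avg.` row is the unweighted mean of the six benchmark scores, so the 0.00 on GSM8k pulls the overall figure down hard. A quick sanity check in Python:

```python
# Sanity check: the leaderboard "Avg." is the unweighted mean of the six scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 66.72,
    "HellaSwag (10-Shot)": 86.98,
    "MMLU (5-Shot)": 65.40,
    "TruthfulQA (0-shot)": 57.60,
    "Winogrande (5-shot)": 82.95,
    "GSM8k (5-shot)": 0.00,
}
print(f"Avg. = {sum(scores.values()) / len(scores):.2f}")  # Avg. = 59.94
```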