Files changed (1)
  1. README.md +12 -15
README.md CHANGED
@@ -131,9 +131,18 @@ This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve th
 
  All GGUF models are available here: [MaziyarPanahi/calme-2.3-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.3-qwen2-7b-GGUF)
 
- # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.3-qwen2-7b)
 
- coming soon!
+ | Metric |Value|
+ |-------------------|----:|
+ |Avg. |22.74|
+ |IFEval (0-Shot) |38.25|
+ |BBH (3-Shot) |30.96|
+ |MATH Lvl 5 (4-Shot)|18.66|
+ |GPQA (0-shot) | 6.26|
+ |MuSR (0-shot) |13.31|
+ |MMLU-PRO (5-shot) |29.01|
 
 
  # Prompt Template
@@ -174,16 +183,4 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
  tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.3-qwen2-7b")
  model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.3-qwen2-7b")
  ```
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.3-qwen2-7b)
-
- | Metric |Value|
- |-------------------|----:|
- |Avg. |22.74|
- |IFEval (0-Shot) |38.25|
- |BBH (3-Shot) |30.96|
- |MATH Lvl 5 (4-Shot)|18.66|
- |GPQA (0-shot) | 6.26|
- |MuSR (0-shot) |13.31|
- |MMLU-PRO (5-shot) |29.01|
-
+ #
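
For context on the loading snippet kept in the second hunk, here is a minimal inference sketch, not part of the diff itself: it assumes the prompt template referenced in the README's "Prompt Template" section is the one stored in the tokenizer config (so `apply_chat_template` applies it), and the message text and generation settings are illustrative only.

```python
# Minimal sketch (not from the README): load the model named in the diff and
# run one chat-formatted generation with the transformers API.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/calme-2.3-qwen2-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU only.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt via the chat template shipped with the tokenizer
# (assumed to match the README's "Prompt Template" section).
messages = [{"role": "user", "content": "Explain what a GGUF file is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings below are arbitrary examples, not recommended values.
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```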