Mention v0.4 model; add Open LLM Leaderboard scores
README.md CHANGED
@@ -21,6 +21,10 @@ base_model:
 - mlabonne/NeuralHermes-2.5-Mistral-7B
 ---
 
+# Update 2024-01-03
+
+Check out our [v0.4 model](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.4), which is based on this model and achieves a better average score of 71.19 versus 69.66.
+
 # Model Description
 
 This is an update to [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2) that removes
@@ -52,7 +56,25 @@ The 14 models are as follows:
 
 - base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 
-
+## Open LLM Leaderboard
+
+|            | v0.3  | v0.4  |
+|------------|-------|-------|
+| Average    | 69.66 | 71.19 |
+| ARC        | 65.96 | 66.81 |
+| HellaSwag  | 85.29 | 86.15 |
+| MMLU       | 64.35 | 65.10 |
+| TruthfulQA | 57.80 | 58.25 |
+| Winogrande | 78.30 | 80.03 |
+| GSM8K      | 66.26 | 70.81 |
+
+## Chat Template
+
+We tried the ChatML and Llama-2 chat templates, but feel free to try other templates.
+
+## Merge Configuration
+
+The merge config file for this model is here:
 
 ```yaml
 models:
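The Chat Template note above says ChatML and the Llama-2 template were tried. As a minimal sketch of what the ChatML format looks like (the `to_chatml` helper below is hypothetical and not part of the model card; only the `<|im_start|>`/`<|im_end|>` markers are standard ChatML):

```python
# Hypothetical sketch of the ChatML prompt format referenced in the card.
# The <|im_start|>/<|im_end|> markers are ChatML's special tokens; the
# helper name `to_chatml` is illustrative, not from the repository.
def to_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # Open an assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice, if the tokenizer ships a chat template, `tokenizer.apply_chat_template` from `transformers` performs this rendering for you.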