Update README.md
base_model:
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
license: mit
---

# Phiter

Phiter is a merge of the following models using [LazyMergekit](https://colab.res

* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)

Thanks to the great [Maxime Labonne](https://huggingface.co/mlabonne), we have evaluation results on [YALL](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

The model tops all other phi-2 fine-tunes on the leaderboard, even most MoE implementations such as Phixtral (as of 27 February 2024).

License: MIT

This model wouldn't have been possible without the support of:

* [Maxime Labonne](https://huggingface.co/mlabonne) - helped me troubleshoot the merge process
* [brittlewis12](https://huggingface.co/brittlewis12) - helped me troubleshoot the creation of GGUF files
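The merge recipe itself is not included in this excerpt. For readers unfamiliar with LazyMergekit, a mergekit config for a SLERP merge of the two base models could look like the sketch below — the `layer_range`, interpolation values, and `dtype` here are illustrative assumptions, not the actual Phiter recipe:

```yaml
# Hypothetical mergekit config - the actual Phiter merge
# parameters are not stated in this README excerpt.
slices:
  - sources:
      - model: rhysjones/phi-2-orange
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2_6-phi-2
        layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # per-layer interpolation for attention
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # per-layer interpolation for MLP blocks
    - value: 0.5                    # default blend for everything else
dtype: bfloat16
```

LazyMergekit generates a config of this shape and runs mergekit on it in a Colab notebook.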

<!-- prompt-template start -->
## Prompt template: ChatML
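The template block itself is cut off in this excerpt. For reference, a single-turn ChatML prompt is conventionally assembled as below — a sketch of the standard format, not code taken from the model card:

```python
def build_chatml_prompt(system_message: str, user_message: str) -> str:
    """Assemble a single-turn prompt in the ChatML format, which wraps
    each turn in <|im_start|>role ... <|im_end|> markers and leaves the
    prompt open at the assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Why is the sky blue?")
print(prompt)
```

The trailing `<|im_start|>assistant` line signals the model to begin its reply; generation is typically stopped on the `<|im_end|>` token.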