Update README.md
README.md
CHANGED
@@ -4,7 +4,7 @@ language:
 - en
 ---
 # **Introduction**
-MoMo-
+MoMo-72B-lora-1.8.7-DPO is trained via Direct Preference Optimization ([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-70B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-70B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
 [MoMo-70B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-70B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model.
 Note that we did not exploit any form of weight merge.
 For leaderboard submission, the trained weights are realigned for compatibility with llama.
@@ -25,7 +25,7 @@ MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https
 
 | Model | ARC | MMLU | TruthfulQA | GSM8K |
 |------------------------------|-------|-------|-------|-------|
-| **V1.8.
+| **V1.8.7 (result < 0.1, %)** | TBU | TBU | TBU | TBU |
 ### Used Environments
 - AMD MI250 & MoAI platform
 - Please visit https://moreh.io/product for more information about the MoAI platform
@@ -38,8 +38,8 @@ MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-
+tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.7-DPO")
 model = AutoModelForCausalLM.from_pretrained(
-    "moreh/MoMo-
+    "moreh/MoMo-72B-lora-1.8.7-DPO"
 )
 ```
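
Below is a hedged end-to-end usage sketch that extends the loading snippet in the diff above. The prompt, dtype, `device_map`, and generation settings are illustrative assumptions rather than part of the model card; `device_map="auto"` additionally requires the `accelerate` package.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.7-DPO")
# Half precision plus automatic device placement keeps the 72B weights
# within multi-GPU memory; the exact settings depend on your hardware.
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-72B-lora-1.8.7-DPO",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain what Direct Preference Optimization does."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```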
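
For context on the training recipe the Introduction describes, here is a minimal DPO sketch under stated assumptions: it uses TRL's `DPOTrainer` with a PEFT LoRA adapter (the card does not say which framework Moreh actually used), a placeholder preference dataset, and illustrative hyperparameters. The `DPOTrainer` call shown matches TRL 0.7-era releases; newer versions move `beta` and tokenizer handling into `DPOConfig`.

```python
# A minimal DPO fine-tuning sketch, NOT the authors' recipe: framework (TRL),
# dataset, LoRA settings, and hyperparameters are all assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# The card names MoMo-70B-LoRA-V1.4 as the starting point for DPO.
base = "moreh/MoMo-70B-LoRA-V1.4"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# DPO trains on preference triples: "prompt", "chosen", "rejected".
# "your/preference-dataset" is a placeholder; the actual data is not disclosed.
dataset = load_dataset("your/preference-dataset", split="train")

# LoRA adapter config; the target module names depend on the checkpoint's
# architecture and are illustrative here.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT adapter, TRL uses the frozen base as the reference
    args=TrainingArguments(
        output_dir="momo-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=5e-6,
        num_train_epochs=1,
        bf16=True,
    ),
    beta=0.1,  # strength of the implicit KL penalty; illustrative value
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```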