Update README.md
README.md CHANGED
````diff
@@ -4,11 +4,11 @@ language:
 - en
 ---
 # **Introduction**
-MoMo-
-[MoMo-
+MoMo-72B-lora-1.8.6-DPO is trained via Direct Preference Optimization ([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several hyperparameter optimizations.
+[MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model.
 Note that we did not exploit any form of weight merge.
 For leaderboard submission, the trained weights are realigned for compatibility with Llama.
-MoMo-
+MoMo-72B is trained on AMD MI250 GPUs using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models.
 
 
 ## Details
````
````diff
@@ -38,8 +38,8 @@ MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-
+tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.6-DPO")
 model = AutoModelForCausalLM.from_pretrained(
-    "moreh/MoMo-
+    "moreh/MoMo-72B-lora-1.8.6-DPO"
 )
 ```
````
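
The updated card names DPO as the training objective but includes no training code. For readers unfamiliar with the objective, here is a minimal PyTorch sketch of the DPO loss from the linked paper; the function name and the `beta=0.1` default are illustrative assumptions, not Moreh's actual training configuration.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Each tensor holds the summed log-probability of a full response
    # under either the trainable policy or the frozen reference model.
    # beta controls how far the policy may drift from the reference.
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Push the preferred response's margin above the rejected one's.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```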
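Similarly, the SFT stage is described only as LoRA on top of QWEN-72B. A minimal sketch with Hugging Face `peft` follows; the rank, alpha, dropout, and target modules are hypothetical placeholders, not the values used for MoMo-72B-LoRA-V1.4.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-72B", trust_remote_code=True
)
# Hypothetical adapter config; r/alpha/dropout/target_modules are guesses.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # Qwen-1 fuses Q/K/V into a single c_attn
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```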
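Finally, the README's snippet loads the model but stops before generation. One way to complete it is sketched below; the dtype and device placement are assumptions about the reader's hardware, not part of the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.6-DPO")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-72B-lora-1.8.6-DPO",
    torch_dtype=torch.float16,  # assumption: half precision to fit memory
    device_map="auto",          # assumption: shard across available GPUs
)

prompt = "Explain what weight merging is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```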