leejunhyeok committed · Commit 828fc93 · 1 Parent(s): 6b0c101

Update README.md

README.md CHANGED

metrics:
- accuracy
library_name: pytorch
---

# 24/04/05 update

We introduce [modelhub](https://model-hub.moreh.io/), an AI model hosting platform powered by AMD MI250 GPUs.
You can now test live inference of this model on modelhub.

# **Introduction**

MoMo-72B-lora-1.8.7-DPO is trained via Direct Preference Optimization ([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
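
For context, DPO optimizes the policy directly on preference pairs instead of training a separate reward model. A standard statement of the objective from the linked paper (the notation is the paper's; our exact hyperparameters, e.g. the value of $\beta$, are not disclosed here) is:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

where $y_w$ and $y_l$ are the preferred and rejected responses for prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen reference (SFT) model, and $\beta$ controls how far the policy may drift from it.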

[MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model.
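
As a rough illustration only, attaching LoRA adapters for SFT with the Hugging Face PEFT library looks like the sketch below; the rank, alpha, dropout, and target modules are placeholder values, not the hyperparameters actually used for MoMo-72B-LoRA-V1.4.

```python
# Illustrative LoRA-for-SFT sketch using Hugging Face PEFT.
# All hyperparameters are placeholders, NOT the values used for MoMo-72B-LoRA-V1.4.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-72B",         # the QWEN-72B base model
    trust_remote_code=True,  # QWEN ships custom modeling code
)
lora_cfg = LoraConfig(
    r=16,                       # adapter rank (placeholder)
    lora_alpha=32,              # scaling factor (placeholder)
    lora_dropout=0.05,          # placeholder
    target_modules=["c_attn"],  # QWEN's fused attention projection (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```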

Note that we did not apply any form of weight merging.
For leaderboard submission, the trained weights were realigned for compatibility with the llama architecture.
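
The realignment procedure itself is not documented in this card. Purely as a hypothetical sketch of what converting a checkpoint to llama-style naming typically involves (the key names below are assumptions for illustration, not the actual script):

```python
# Hypothetical sketch: renaming checkpoint tensors to llama-style key names.
# The mapping is an illustrative assumption; the real realignment script is
# not published in this card.
def remap_to_llama(state_dict: dict) -> dict:
    """Rename tensors via a lookup table, leaving unknown keys untouched."""
    key_map = {
        # source name (assumed)   ->  llama-style name
        "transformer.wte.weight": "model.embed_tokens.weight",
        "transformer.ln_f.weight": "model.norm.weight",
    }
    return {key_map.get(k, k): v for k, v in state_dict.items()}
```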

MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD MI250 GPUs.
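
Beyond the hosted demo above, a minimal local inference sketch with Hugging Face Transformers is shown below; the repository id and generation settings are illustrative assumptions, and a 72B model in fp16 needs multiple GPUs or CPU offloading to load.

```python
# Minimal inference sketch with Hugging Face Transformers.
# The repo id and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moreh/MoMo-72B-lora-1.8.7-DPO"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~144 GB of weights: multi-GPU or offload required
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```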

## Details

### Used Libraries