moreh-sungmin, exzread committed

Commit 6856c07
Parent(s): 8a538ac

Update README.md (#7)

- Update README.md (16086583564aa093f0bd28d658e633ed943fc490)

Co-authored-by: wit <[email protected]>

Files changed (1):
  1. README.md +12 -0

README.md CHANGED

@@ -2,6 +2,18 @@
 license: mit
 language:
 - en
+- id
+datasets:
+- Ichsan2895/alpaca-gpt4-indonesian
+metrics:
+- accuracy
+- character
+library_name: keras
+pipeline_tag: text-generation
+tags:
+- code
+- biology
+- finance
 ---
 # **Introduction**
 MoMo-72B-lora-1.8.7-DPO is trained via Direct Preference Optimization([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
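The DPO objective referenced above (from the cited paper, not this model's actual training code) can be sketched per example as follows. It assumes you already have scalar sequence log-probabilities for the chosen and rejected responses under the policy and a frozen reference model; the function name and `beta` default are illustrative, not taken from the repository.

```python
import math

def dpo_loss(policy_logp_chosen: float,
             policy_logp_rejected: float,
             ref_logp_chosen: float,
             ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (implicit reward margin)).

    The implicit reward of a response is beta times its log-probability
    ratio between the policy and the frozen reference model.
    """
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # Numerically stable -log(sigmoid(margin)) = log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

When the policy matches the reference model exactly, the margin is zero and the loss is log 2; as the policy assigns relatively more probability to the chosen response, the loss decreases, which is what drives the preference alignment.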