---
license: cc-by-4.0
base_model: davidkim205/komt-solar-10.7b-sft-v5
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: nhn_dpo_v3_komt-solar-10.7b-sft-v5_DPO
  results: []
---

# ENERGY-DRINK-LOVE/eeve_dpo-v3

### Our Team
* Youjin Chung
* Jingyeom Kim

## Model

### Base Model
* [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)

### Hardware and Software
* Hardware: 8× NVIDIA A100 GPUs for training
* Software: DeepSpeed and the Hugging Face TRL Trainer

### Dataset
* DPO dataset
  * In-house DPO dataset (built using AI-Hub data)
  * Translations of English preference datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)

### Training Method
* [DPO](https://arxiv.org/abs/2305.18290)
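The core of DPO is a pairwise loss that pushes the policy to assign higher likelihood to the chosen response than to the rejected one, relative to a frozen reference model. A minimal sketch of that loss is below; this is an illustration of the objective from the paper, not the TRL implementation used for this model, and the `beta=0.1` default is an assumption.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed token log-probability of the chosen or
    rejected response under the trained policy or the frozen reference
    model. beta scales the implicit KL penalty (0.1 here, an assumed
    default matching the DPO paper).
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): minimized when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss is log 2; the loss shrinks as the policy's preference for the chosen response grows relative to the reference.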

## Benchmark

**[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**


**[Ko-LLM-Leaderboard](https://www.aihub.or.kr/leaderboard/view.do?currMenu=500&topMenu=102)**
* Ranked 4th as of 2024-03-16
* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6551c0e37bbfce18781a8748/xKS2X4hfrs100mpr4Jv89.png)

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
|   61.20 |  57.51 |        70.33 |   53.34 |         68.49 |           56.32 |