---
license: cc-by-4.0
base_model: davidkim205/komt-solar-10.7b-sft-v5
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: nhn_dpo_v3_komt-solar-10.7b-sft-v5_DPO
    results: []
---

# ENERGY-DRINK-LOVE/eeve_dpo-v3

## Our Team

- Youjin Chung
- Jingyeom Kim

## Model

### Base Model

- [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)

### Hardware and Software

- Hardware: 8× NVIDIA A100 GPUs for training our model
- Software: DeepSpeed library and Hugging Face TRL Trainer

## Dataset

- DPO dataset
  - Self-built DPO dataset (using AI-Hub datasets)
  - Translations of English preference datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)
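Each DPO example pairs a prompt with a preferred and a dispreferred response. Below is a minimal sketch of the `prompt`/`chosen`/`rejected` record schema that TRL's `DPOTrainer` consumes; the Korean strings are invented placeholders, not rows from the actual dataset:

```python
# Sketch of the preference-pair record format used for DPO training.
# The texts are hypothetical examples, not from the real dataset.
def make_dpo_record(prompt: str, chosen: str, rejected: str) -> dict:
    """Package one preference pair into the prompt/chosen/rejected schema."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

record = make_dpo_record(
    prompt="๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?",   # "What is the capital of South Korea?"
    chosen="๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค.",    # preferred answer
    rejected="์ž˜ ๋ชจ๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค.",              # dispreferred answer
)
print(sorted(record))
```

A `datasets.Dataset` of such records can be passed directly to the trainer.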

## Training Method

- DPO (Direct Preference Optimization) with the Hugging Face TRL Trainer
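As background on the objective, the per-pair DPO loss can be sketched in plain Python. The β and log-probability values below are illustrative assumptions, not numbers from this training run:

```python
import math

def dpo_loss(beta: float,
             pol_chosen: float, pol_rejected: float,
             ref_chosen: float, ref_rejected: float) -> float:
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the trained policy and the frozen reference model.
    """
    margin = (pol_chosen - ref_chosen) - (pol_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)); a positive margin pushes the loss
    # below log(2), the value at initialization (policy == reference).
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Hypothetical numbers: the policy favors the chosen response more
# strongly than the reference does, so the loss is below log(2).
loss = dpo_loss(beta=0.1,
                pol_chosen=-1.0, pol_rejected=-2.0,
                ref_chosen=-1.2, ref_rejected=-1.5)
print(round(loss, 4))
```

In practice this objective is handled internally by TRL; the sketch only shows what the trainer optimizes.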

## Benchmark

### Ko LM Eval Harness

### Ko-LLM-Leaderboard

- Ranked 4th as of 2024-03-16

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------- | ------ | ------------ | ------- | ------------- | --------------- |
| 61.20   | 57.51  | 70.33        | 53.34   | 68.49         | 56.32           |