---
language:
- ko
datasets:
- kyujinpy/KoCoT_2000
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **KoT-platypus2**
![img](./KoT-platypus2.png)
**CoT + KO-platypus2 = KoT-platypus2**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
KoT-platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
GitHub KoT-platypus2: [KoT-platypus2](https://github.com/KyujinHan/KoT-platypus)

**Base Model**
[KO-Platypus2-7B-ex](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex)
More details (GitHub): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2)
More details (GitHub): [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)

**Training Dataset**
I used [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000), a Korean translation of [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) produced with DeepL.

Training was done on Colab with a single A100 40GB GPU.
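
For a quick look at the training data, the standard `datasets` API works; this is a minimal sketch, and the `train` split name is an assumption (check the dataset card for the actual splits and columns):

```python
# Minimal sketch for inspecting KoCoT_2000. The "train" split name is an
# assumption; see the dataset card for the actual splits and column names.
from datasets import load_dataset

kocot = load_dataset("kyujinpy/KoCoT_2000", split="train")
print(kocot)     # row count and column names
print(kocot[0])  # one translated CoT example
```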

**Training Hyperparameters**
| Hyperparameters | Value |
| --- | --- |
| batch_size | `64` |
| micro_batch_size | `1` |
| Epochs | `15` |
| learning_rate | `1e-5` |
| cutoff_len | `4096` |
| lr_scheduler | `linear` |
| base_model | `kyujinpy/KO-Platypus2-13B` |
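
For orientation, here is one way these values could map onto a standard `transformers` Trainer configuration. This is a hedged sketch, not the actual training script (see the GitHub repos above); the gradient-accumulation value is derived as batch_size / micro_batch_size = 64 / 1 = 64:

```python
# Sketch only: an assumed mapping of the table above onto TrainingArguments.
# The real training script lives in the linked GitHub repos and may differ.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./KoT-platypus2-13B",
    per_device_train_batch_size=1,   # micro_batch_size
    gradient_accumulation_steps=64,  # batch_size / micro_batch_size = 64 / 1
    num_train_epochs=15,             # Epochs
    learning_rate=1e-5,              # learning_rate
    lr_scheduler_type="linear",      # lr_scheduler
    fp16=True,                       # assumption: fp16 on a single A100 40GB
)
# cutoff_len=4096 is applied at tokenization time (truncation/max_length),
# not through TrainingArguments.
```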


# **Model Benchmark**

## KO-LLM leaderboard
- Scores follow the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

![img](./leaderboard.png)
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KoT-Platypus2-13B(ours) | NaN | NaN | NaN | NaN | NaN | NaN |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
> Comparison with the top 4 SOTA models. (updated: 10/05)


# Implementation Code
```python
### KoT-platypus2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/KoT-platypus2-13B"
# Load the model in fp16 and shard it across available devices.
CoT_llama = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
CoT_llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```
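
A quick generation example with the model loaded above; the Korean prompt and sampling settings are illustrative assumptions, not the prompt template used in training:

```python
# Illustrative usage of the model loaded above; the prompt and sampling
# parameters are assumptions, not the training prompt template.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = CoT_llama_tokenizer(prompt, return_tensors="pt").to(CoT_llama.device)

with torch.no_grad():
    output_ids = CoT_llama.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(CoT_llama_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```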

> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

---