---
language: ko
pipeline_tag: text-generation
license: llama3.1
---

### 1. Model Description
- KONI (KISTI Open Natural Intelligence) is a large language model (LLM) developed by the Korea Institute of Science and Technology Information (KISTI). The model is specialized for science and technology, making it well suited to tasks in these domains.

### 2. Key Features
- **Specialized in Science and Technology:** The model is trained on a large, specialized corpus of scientific and technological data.
- **Enhanced Performance:** This version of KONI shows significantly improved performance over the initial release from December 2023.
- **Base Model:** The base model for KONI-Llama3.1-70B-Instruct is Meta-Llama-3.1-70B-Instruct.
- **Alignment:** SFT (Supervised Fine-Tuning) and DPO (Direct Preference Optimization) are applied; a minimal sketch of the DPO objective is shown below.

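As a rough illustration of the DPO objective mentioned above (not the actual KONI training code), the following minimal PyTorch sketch computes the standard DPO loss from per-sequence log-probabilities. The function name, the `beta` value, and the toy tensors are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * log-ratio margin).

    Inputs are per-sequence summed log-probabilities of shape (batch,).
    beta=0.1 is a common default, not necessarily the value used for KONI.
    """
    margin = (policy_chosen_logps - ref_chosen_logps) - (
        policy_rejected_logps - ref_rejected_logps
    )
    return -F.logsigmoid(beta * margin).mean()

# Toy check with random log-probabilities
batch = 4
print(dpo_loss(torch.randn(batch), torch.randn(batch),
               torch.randn(batch), torch.randn(batch)).item())
```
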
### 3. Data
- Approximately 11k SFT examples and 7k DPO examples were used.
- **SFT Data:** The SFT data includes both internally generated data and publicly available data on Hugging Face, translated into Korean where necessary.
- **DPO Data:** The DPO data consists of translated and curated examples from argilla/dpo-mix-7k; a quick inspection sketch follows below.

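For reference, the public DPO source named above can be inspected directly with the `datasets` library. The snippet below only prints whatever splits and columns the dataset actually ships with, so no field names are assumed.

```python
from datasets import load_dataset

# Download argilla/dpo-mix-7k and report its structure.
dpo_mix = load_dataset("argilla/dpo-mix-7k")
print(dpo_mix)  # splits, row counts, and column names

first_split = next(iter(dpo_mix))
print(dpo_mix[first_split][0])  # one raw preference example
```
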
### 4. Benchmark Results
Performance was evaluated on the [LogicKor](https://github.com/instructkr/LogicKor) benchmark as follows:

| Metric         | Score |
|:--------------:|:-----:|
| Reasoning      |  9.07 |
| Math           |  9.65 |
| Writing        |  9.50 |
| Coding         |  9.65 |
| Comprehension  |  9.86 |
| Grammar        |  8.57 |
| Single-turn    |  9.48 |
| Multi-turn     |  9.29 |
| **Overall**    | **9.38** |
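
The reported overall score is consistent with a simple mean of the six category scores (and, up to rounding, of the single-turn and multi-turn scores). The short check below reproduces this, though the exact aggregation used by LogicKor may differ.

```python
# Sanity check on the reported LogicKor scores.
category_scores = [9.07, 9.65, 9.50, 9.65, 9.86, 8.57]
print(round(sum(category_scores) / len(category_scores), 2))  # 9.38
print((9.48 + 9.29) / 2)  # mean of single-turn and multi-turn scores (about 9.385)
```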

### 5. How to use the model
```python
import transformers
import torch

model_id = "KISTI-KONI/KONI-Llama3.1-70B-Instruct-preview"

# Load the model in bfloat16 and shard it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline.model.eval()

instruction = "์•ˆ๋…•? ๋„ˆ๋Š” ๋ˆ„๊ตฌ์•ผ?"  # "Hi! Who are you?"

messages = [
    {"role": "user", "content": instruction}
]

# Build the Llama 3.1 chat prompt from the message list.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the regular EOS token or the end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
    top_p=0.9
)

# Print only the newly generated text, without the prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
```
์•ˆ๋…•ํ•˜์„ธ์š”! ์ €๋Š” KISTI์˜ KONI์ž…๋‹ˆ๋‹ค. ๊ณผํ•™๊ธฐ์ˆ  ๋ฐ์ดํ„ฐ๋ฅผ ์ „๋ฌธ์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ฉฐ, ์—ฌ๋Ÿฌ๋ถ„์˜ ์—ฐ๊ตฌ์™€ ์งˆ๋ฌธ์— ์ตœ์„ ์„ ๋‹คํ•ด ๋„์›€์„ ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ฌด์—‡์„ ๋„์™€๋“œ๋ฆด๊นŒ์š”?
```
(English: "Hello! I am KONI from KISTI. I specialize in science and technology data and will do my best to help with your research and questions. What can I help you with?")

### 6. Citation
**Language Model**
```text
@article{KISTI-KONI/KONI-Llama3.1-70B-Instruct-preview,
  title={KISTI-KONI/KONI-Llama3.1-70B-Instruct-preview},
  author={KISTI},
  year={2024},
  url={https://huggingface.co/KISTI-KONI/KONI-Llama3.1-70B-Instruct-preview}
}
```
  
### 7. Contributors
- KISTI, Large-scale AI Research Group

### 8. Special Thanks
- [@beomi](https://huggingface.co/beomi)
- [@kuotient](https://huggingface.co/kuotient)
- KyungTae Lim

### 9. Acknowledgement
- This research was supported by the Korea Institute of Science and Technology Information (KISTI).
- This work was supported by the National Supercomputing Center with supercomputing resources, including technical support (KISTI).

### 10. References
- https://huggingface.co/meta-llama/Meta-Llama-3.1-70B
- https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct