---
base_model:
- meta-llama/Llama-3.2-3B
language:
- en
- ko
library_name: transformers
license: llama3.2
---
<a href="https://github.com/MLP-Lab/Bllossom">
<img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="30%" height="30%">
</a>
# Update!
* [2024.10.08] The Bllossom-3B model received its first update.
# Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) |
```bash
Our Bllossom team is releasing the Bllossom-3B model.
llama3.2-3B came out without Korean support?? This Bllossom-3B takes the base model, which does not support Korean, and enhances it for Korean-English.
 - Additionally pre-trained on 150GB of curated Korean data with 100% full-tuning. (We burned through a lot of GPUs.)
 - Instruction tuning was carried out with a highly curated dataset.
 - A fully bilingual model with no degradation of English performance.
 - Achieved the highest LogicKor score among models under 5B, landing in the low 6-point range.
 - Only instruction tuning was performed. Try tuning it further with methods such as DPO to improve performance (see the sketch right after this block).
 - We did not use answer data or target specific benchmarks such as MT-Bench or LogicKor to inflate scores. (If you train targeting those benchmarks, you can even reach 8 points...)

As always, this model is available for commercial use.

1. Bllossom was presented at AAAI 2024, NAACL 2024, and LREC-COLING 2024 (oral).
2. We will keep updating good language models!! Anyone interested in joint research (especially papers) on strengthening Korean support is always welcome!!
```
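As noted above, only instruction tuning was applied, and preference optimization such as DPO is left to users. Below is a minimal, hedged sketch of what that could look like with the `trl` library; the preference dataset, hyperparameters, and output directory are illustrative assumptions, not values published by the Bllossom team.

```python
# Hypothetical sketch only: preference-tuning the released checkpoint with DPO via `trl`.
# Dataset, hyperparameters, and output path are placeholders for illustration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Bllossom/llama-3.2-Korean-Bllossom-3B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with prompt/chosen/rejected examples works; this public one is a stand-in.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(
    output_dir="bllossom-3b-dpo",      # placeholder output directory
    per_device_train_batch_size=2,
    num_train_epochs=1,
    beta=0.1,                          # strength of the DPO regularization toward the reference model
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,        # `tokenizer=` in older trl releases
)
trainer.train()
```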
```python
from llama_cpp import Llama
from transformers import AutoTokenizer

model_id = 'Bllossom/llama-3.2-Korean-Bllossom-3B'

# The tokenizer is only used to build the Llama-3 chat prompt; generation runs on the GGUF model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama(
    model_path='llama-3.2-Korean-Bllossom-3B-gguf-Q4_K_M.gguf'
)

# "Cheolsu had 20 pencils. Younghee took half of them and Minsu took 5 of the rest.
#  How many pencils does Cheolsu have left?"
instruction = "철수가 20개의 연필을 가지고 있었는데 영희가 절반을 가져가고 민수가 남은 5개를 가져갔으면 철수에게 남은 연필의 갯수는 몇개인가요?"

messages = [
    {"role": "user", "content": instruction}
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

generation_kwargs = {
    "max_tokens": 512,
    "stop": ["<|eot_id|>"],
    "echo": True,          # the returned text includes the prompt; it is stripped below
    "top_p": 0.9,
    "temperature": 0.6,
}

response_msg = model(prompt, **generation_kwargs)
print(response_msg['choices'][0]['text'][len(prompt):])
```
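The `model_path` above points at a local GGUF file. If it has not been downloaded yet, one way to fetch it is via `huggingface_hub`; the repo id below is an assumption inferred from the file name, so adjust it to wherever the quantized weights are actually hosted.

```python
from huggingface_hub import hf_hub_download

# Assumed repo id / filename, inferred from the model_path used in the example above.
gguf_path = hf_hub_download(
    repo_id="Bllossom/llama-3.2-Korean-Bllossom-3B-gguf-Q4_K_M",
    filename="llama-3.2-Korean-Bllossom-3B-gguf-Q4_K_M.gguf",
)
# Pass the downloaded path to Llama(model_path=gguf_path).
```

With the sampling settings above (temperature 0.6, top_p 0.9), the example produces output along these lines: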
```
철수가 20개의 연필을 가지고 있었고 영희가 절반을 가져가면, 영희가 가져간 연필의 갯수는 20 / 2 = 10개입니다.

이제 철수가 남은 연필의 갯수를 계산해보겠습니다. 영희가 10개를 가져간 후 철수가 남은 연필의 갯수는 20 - 10 = 10개입니다.

민수가 남은 5개를 가져갔으므로, 철수가 남은 연필의 갯수는 10 - 5 = 5개입니다.

따라서 철수가 남은 연필의 갯수는 5개입니다.
```
(In English: Younghee takes half of the 20 pencils, 20 / 2 = 10, leaving 20 - 10 = 10; Minsu then takes 5 of those, so Cheolsu has 10 - 5 = 5 pencils left.)
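For reference, the same prompt can also be run against the full-precision checkpoint with plain `transformers` instead of `llama-cpp-python`. This is a hedged sketch, not an official snippet from this card; it mirrors the generation settings above and assumes enough memory to load the 3B model in bfloat16.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bllossom/llama-3.2-Korean-Bllossom-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Same pencil question as in the GGUF example above.
messages = [{"role": "user", "content": "철수가 20개의 연필을 가지고 있었는데 영희가 절반을 가져가고 민수가 남은 5개를 가져갔으면 철수에게 남은 연필의 갯수는 몇개인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,          # required for temperature / top_p to take effect
    temperature=0.6,
    top_p=0.9,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```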
## Supported by
- AICA <img src="https://aica-gj.kr/images/logo.png" width="20%" height="20%">
## Citation
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}}
}
```
**Vision-Language Model**
```text
@misc{bllossom-V,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}}
}
```
## Contact
- 임경태 (KyungTae Lim), Professor at Seoultech. `[email protected]`
- 함영균 (Younggyun Hahm), CEO of Teddysum. `[email protected]`
- 김한샘 (Hansaem Kim), Professor at Yonsei. `[email protected]`
## Contributor
- **유한결 (Hangyeol Yoo)**, [email protected]
- 신동재 (Dongjae Shin), [email protected]
- 임현석 (Hyeonseok Lim), [email protected]
- 원인호 (Inho Won), [email protected]
- 김민준 (Minjun Kim), [email protected]
- 송승우 (Seungwoo Song), [email protected]
- 육정훈 (Jeonghun Yuk), [email protected]
- 최창수 (Chansu Choi), [email protected]
- 송서현 (Seohyun Song), [email protected]