---
license: llama3
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
- ko
tags:
- facebook
- meta
- llama
- llama-3
- llama-3-ko
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/646484cfb90150b2706df03b/BEOyMpnnY9VY2KXlc3V2F.png" width="20%"/>
</p>
# Llama-3-MAAL-8B-Instruct-v0.1
We release MAAL, a Multilingual Adaptive Augmentation Language model that combines multilingual capabilities with adaptive augmentation techniques.
- **Developed by:** [maum.ai Brain NLP](https://maum-ai.github.io). Jaeyoon Jung, Jinjoo Lee, Yongjae Lee, Dongjun Lee, Woosung Joo
- **Language(s) (NLP):** Korean, English (currently bilingual)
## Model Description
Version 0.1 uses cross-lingual training to transfer instruction-following capabilities from English to Korean.
- We trained this model on 8 H100-80GB GPUs for 1 day with a cross-lingual training dataset.
- We recommend using the fixed system prompt below unless you fine-tune the model.
```
너는 마음에이아이의 챗봇 MAAL이다. 고객의 질문에 친절하게 답하여라.
```
(Translation: "You are MAAL, the chatbot of maum.ai. Answer customers' questions kindly.")
## Sample Inference Code (GPU)
```python
import transformers
import torch

model_id = "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1"

# load the model in bf16 so the 8B weights fit comfortably on a single GPU
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to("cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# stream generated tokens to stdout, skipping the prompt and special tokens
streamer = transformers.TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# we recommend using the fixed system prompt for the model unless you fine-tune it
prompt = "너는 마음에이아이의 챗봇 MAAL이다. 고객의 질문에 친절하게 답하여라."
# "A box holds 30 apples; there were 3 boxes at first, and I ate 5 apples.
#  How many apples are left in total?" (expected answer: 3 * 30 - 5 = 85)
instruction = "사과 한 박스에는 사과가 30개 들어있는데, 처음에는 사과 3박스가 있었고, 내가 사과 5개를 먹었어. 남은 사과는 총 몇개야?"

messages = [
    {"role": "system", "content": prompt},
    {"role": "user", "content": instruction},
]

# build the Llama-3 chat prompt; add_generation_prompt=True appends the
# assistant header so the model starts generating its reply
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=1024, pad_token_id=tokenizer.eos_token_id)
```
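`TextStreamer` prints the response to the console as it is generated; to capture the full text instead, drop the `streamer` argument and decode `outputs[0]` with `tokenizer.decode(..., skip_special_tokens=True)`.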
## Evaluation Results
As the main goal of version 0.1 is to **transfer instruction-following capabilities from English to Korean** without continuous pre-training, etc., we select [**LogicKor**](https://github.com/StableFluffy/LogicKor) as our evaluation method to assess Korean instruction-following skills.
We compare our model with similarly sized models (fewer than 13B parameters) that have been fine-tuned on Korean datasets. \* denotes a self-reported result.
|Model|Single-turn (↑)|Multi-turn (↑)|Average (↑)|
|-|-|-|-|
|maum-ai/Llama-3-MAAL-8B-Instruct-v0.1\*|**5.80**|4.66|**5.23**|
|maywell/Synatra-kiqu-10.7B|5.71|4.73|5.22|
|yanolja/EEVE-Korean-Instruct-10.8B-v1.0|5.78|3.92|4.85|
|nlpai-lab/KULLM3|4.61|**4.83**|4.72|
|MLP-KTLim/llama3-Bllossom\*|2.11|1.57|1.84|
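The reported average appears to be the simple mean of the single-turn and multi-turn scores. A minimal sketch (scores copied from the table above) that reproduces the Average column:

```python
# Recompute the Average (↑) column as the mean of single-turn and multi-turn scores.
scores = {
    "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1": (5.80, 4.66),
    "maywell/Synatra-kiqu-10.7B": (5.71, 4.73),
    "yanolja/EEVE-Korean-Instruct-10.8B-v1.0": (5.78, 3.92),
    "nlpai-lab/KULLM3": (4.61, 4.83),
    "MLP-KTLim/llama3-Bllossom": (2.11, 1.57),
}
for name, (single_turn, multi_turn) in scores.items():
    print(f"{name}: {(single_turn + multi_turn) / 2:.2f}")
```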
## Limitations
Because this model was trained on a small dataset, it has several limitations:
- It struggles to generate diverse Korean text.
- It lacks Korean-specific knowledge and cultural context (localization).
- It does not support image or video inputs.
## Todo
We will address these limitations one by one in future upgrades:
- Enhance Korean generation through vocabulary expansion and continuous pre-training (more Korean corpus!).
- Localize with a cultural adaptation method and additional Korean knowledge data. [*similar idea*](https://aclanthology.org/2023.emnlp-main.18/)
- Develop a vision-language model that can handle both video and image inputs. [*similar idea*](https://github.com/PKU-YuanGroup/Video-LLaVA)