---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**Model Developers** HyunseokLee, TaeyoungKim (KAIST ALIN Lab, OMNIOUS.AI)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**

ko-en-llama2-13b is an auto-regressive language model based on the Llama 2 transformer architecture.

**Base Model**

Llama-2-13B

**Training Dataset**

Open datasets: Wikipedia and AI Hub (English + Korean).

**Training Objective**

The model is trained to learn Korean from this corpus while preserving Llama 2's English ability. Training is still in progress.
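
**How to Use**

A minimal text-generation sketch with the transformers library. The repo id below is a placeholder assumption; substitute this model's actual Hugging Face Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id (assumption) -- replace with this model's actual Hub path.
model_id = "ko-en-llama2-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model is trained on both Korean and English, so prompts in either work.
prompt = "대한민국의 수도는"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```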