---
license: cc-by-4.0
language:
- ko
---

# komt : korean multi task instruction tuning model

![multi task instruction tuning.jpg](https://github.com/davidkim205/komt/assets/16680469/c7f6ade7-247e-4b62-a94f-47e19abea68e)

Recently, following the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with its capabilities. When it comes to Korean, however, many of these models still struggle to provide accurate answers or generate Korean text effectively. This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).

## Model Details

This model was trained with supervised fine-tuning (SFT) on the komt dataset, using Edentns/DataVortexS-10.7B-dpo-v1.11 as the base model. It is not the final release; performance tuning with various datasets is still in progress.

* **Model Developers** : davidkim (changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Base model** : Edentns/DataVortexS-10.7B-dpo-v1.11
* **Dataset** : comp-341k
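
## Usage (sketch)

The snippet below is a minimal inference sketch using the Hugging Face `transformers` library. The repository ID and the instruction-style prompt shown here are placeholder assumptions for illustration; refer to https://github.com/davidkim205/komt for the actual model ID and prompt template used in training.

```python
# Minimal inference sketch. Assumptions: the model ID below is a placeholder,
# and the prompt format is illustrative rather than the exact training template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidkim205/komt-DataVortexS-10.7B"  # hypothetical repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 10.7B model on a single GPU
    device_map="auto",
)

# Illustrative instruction-style prompt (Korean question: "What is the capital of Korea?")
prompt = "### instruction: 한국의 수도는 어디인가요?\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```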