---
base_model:
- openchat/openchat_3.5
language:
- ko
- en
library_name: adapter-transformers
license: mit
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- finance
- biology
- legal
- art
- text-generation-inference
---
### ⛱ ktdsbaseLM v0.11 is built on openchat3.5 as its foundation model and was
### developed to be applicable to Korean and Korea's diverse cultural contexts.
### Trained on self-produced Korean data spanning 53 domains, it is a model that
### understands Korean social values and culture. ✌
# โถ ๋ชจ๋ธ ์„ค๋ช…
- ๋ชจ๋ธ๋ช… ๋ฐ ์ฃผ์š”๊ธฐ๋Šฅ:
KTDSbaseLM v0.11์€ OpenChat 3.5 ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ SFT ๋ฐฉ์‹์œผ๋กœ ํŒŒ์ธํŠœ๋‹๋œ Mistral 7B / openchat3.5 ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.
ํ•œ๊ตญ์–ด์™€ ํ•œ๊ตญ์˜ ๋‹ค์–‘ํ•œ ๋ฌธํ™”์  ๋งฅ๋ฝ์„ ์ดํ•ดํ•˜๋„๋ก ์„ค๊ณ„๋˜์—ˆ์œผ๋ฉฐ โœจโœจ, ์ž์ฒด ์ œ์ž‘ํ•œ 135๊ฐœ ์˜์—ญ์˜ ํ•œ๊ตญ์–ด
๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•ด ํ•œ๊ตญ ์‚ฌํšŒ์˜ ๊ฐ€์น˜์™€ ๋ฌธํ™”๋ฅผ ๋ฐ˜์˜ํ•ฉ๋‹ˆ๋‹ค.
์ฃผ์š” ๊ธฐ๋Šฅ์œผ๋กœ๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑ, ๋Œ€ํ™” ์ถ”๋ก , ๋ฌธ์„œ ์š”์•ฝ, ์งˆ์˜์‘๋‹ต, ๊ฐ์ • ๋ถ„์„ ๋ฐ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ๊ด€๋ จ ๋‹ค์–‘ํ•œ ์ž‘์—…์„ ์ง€์›ํ•˜๋ฉฐ,
ํ™œ์šฉ ๋ถ„์•ผ๋Š” ๋ฒ•๋ฅ , ์žฌ๋ฌด, ๊ณผํ•™, ๊ต์œก, ๋น„์ฆˆ๋‹ˆ์Šค, ๋ฌธํ™” ์—ฐ๊ตฌ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์—์„œ ์‘์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
- ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜: KTDSBaseLM v0.11์€ Mistral 7B ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ, ํŒŒ๋ผ๋ฏธํ„ฐ ์ˆ˜๋Š” 70์–ต ๊ฐœ(7B)๋กœ ๊ตฌ์„ฑ๋œ ๊ณ ์„ฑ๋Šฅ ์–ธ์–ด ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.
์ด ๋ชจ๋ธ์€ OpenChat 3.5๋ฅผ ํŒŒ์šด๋ฐ์ด์…˜ ๋ชจ๋ธ๋กœ ์‚ผ์•„, SFT(์ง€๋„ ๋ฏธ์„ธ ์กฐ์ •) ๋ฐฉ์‹์„ ํ†ตํ•ด ํ•œ๊ตญ์–ด์™€ ํ•œ๊ตญ ๋ฌธํ™”์— ํŠนํ™”๋œ ์„ฑ๋Šฅ์„ ๋ฐœํœ˜ํ•˜๋„๋ก ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.
Mistral 7B์˜ ๊ฒฝ๋Ÿ‰ํ™”๋œ ๊ตฌ์กฐ๋Š” ๋น ๋ฅธ ์ถ”๋ก  ์†๋„์™€ ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์„ฑ์„ ๋ณด์žฅํ•˜๋ฉฐ, ๋‹ค์–‘ํ•œ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ์ž‘์—…์— ์ ํ•ฉํ•˜๊ฒŒ ์ตœ์ ํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.
์ด ์•„ํ‚คํ…์ฒ˜๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑ, ์งˆ์˜์‘๋‹ต, ๋ฌธ์„œ ์š”์•ฝ, ๊ฐ์ • ๋ถ„์„๊ณผ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ์ž‘์—…์—์„œ ํƒ์›”ํ•œ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.
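To make the SFT setup more concrete, below is a minimal sketch of how a LoRA-based supervised fine-tune of this kind is typically wired up with the peft library on the OpenChat 3.5 base. The rank, alpha, and target modules are illustrative assumptions, not the settings actually used for this model.
<pre><code>
# Minimal SFT/LoRA sketch (hyperparameters are assumptions, not this model's settings)
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openchat/openchat_3.5")

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA weights are trainable
# ...then train with a standard transformers Trainer on the instruction data.
</code></pre>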
# โท ํ•™์Šต ๋ฐ์ดํ„ฐ
- ktdsbaseLM v0.11์€ ์ž์ฒด ๊ฐœ๋ฐœํ•œ ์ด 3.6GB ํฌ๊ธฐ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋‘ 233๋งŒ ๊ฑด์˜ QnA, ์š”์•ฝ, ๋ถ„๋ฅ˜ ๋“ฑ ๋ฐ์ดํ„ฐ๋ฅผ ํฌํ•จํ•˜๋ฉฐ,
๊ทธ ์ค‘ 133๋งŒ ๊ฑด์€ 53๊ฐœ ์˜์—ญ์˜ ๊ฐ๊ด€์‹ ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜์—ญ์—๋Š” ํ•œ๊ตญ์‚ฌ, ์‚ฌํšŒ, ์žฌ๋ฌด, ๋ฒ•๋ฅ , ์„ธ๋ฌด, ์ˆ˜ํ•™, ์ƒ๋ฌผ, ๋ฌผ๋ฆฌ, ํ™”ํ•™ ๋“ฑ์ด ํฌํ•จ๋˜๋ฉฐ,
Chain of Thought ๋ฐฉ์‹์œผ๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ 130๋งŒ ๊ฑด์˜ ์ฃผ๊ด€์‹ ๋ฌธ์ œ๋Š” ํ•œ๊ตญ์‚ฌ, ์žฌ๋ฌด, ๋ฒ•๋ฅ , ์„ธ๋ฌด, ์ˆ˜ํ•™ ๋“ฑ 38๊ฐœ ์˜์—ญ์— ๊ฑธ์ณ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค.
ํ•™์Šต ๋ฐ์ดํ„ฐ ์ค‘ ํ•œ๊ตญ์˜ ์‚ฌํšŒ ๊ฐ€์น˜์™€ ์ธ๊ฐ„์˜ ๊ฐ์ •์„ ์ดํ•ดํ•˜๊ณ  ์ง€์‹œํ•œ ์‚ฌํ•ญ์— ๋”ฐ๋ผ ์ถœ๋ ฅํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šตํ•˜์˜€์Šต๋‹ˆ๋‹ค.
- Training Instruction Dataset Format:
<pre><code>{"prompt": "prompt text", "completion": "ideal generated text"}</code></pre>
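As a purely hypothetical illustration of this format, a single record and one way to load a JSONL file of such records with the datasets library might look like the following; the record contents and file name are invented for the example.
<pre><code>
import json
from datasets import load_dataset

# Hypothetical record in the format above (contents invented for illustration)
record = {"prompt": "대한민국의 수도는 어디인가요?", "completion": "대한민국의 수도는 서울입니다."}
print(json.dumps(record, ensure_ascii=False))

# Loading a JSONL file of such records (file name is a placeholder)
dataset = load_dataset("json", data_files="train.jsonl", split="train")
print(dataset[0]["prompt"], dataset[0]["completion"])
</code></pre>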
# โธ ์‚ฌ์šฉ ์‚ฌ๋ก€
ktdsbaseLM v0.11์€ ๋‹ค์–‘ํ•œ ์‘์šฉ ๋ถ„์•ผ์—์„œ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด:
- ๊ต์œก ๋ถ„์•ผ: ์—ญ์‚ฌ, ์ˆ˜ํ•™, ๊ณผํ•™ ๋“ฑ ๋‹ค์–‘ํ•œ ํ•™์Šต ์ž๋ฃŒ์— ๋Œ€ํ•œ ์งˆ์˜์‘๋‹ต ๋ฐ ์„ค๋ช… ์ƒ์„ฑ.
- ๋น„์ฆˆ๋‹ˆ์Šค: ๋ฒ•๋ฅ , ์žฌ๋ฌด, ์„ธ๋ฌด ๊ด€๋ จ ์งˆ์˜์— ๋Œ€ํ•œ ๋‹ต๋ณ€ ์ œ๊ณต ๋ฐ ๋ฌธ์„œ ์š”์•ฝ.
- ์—ฐ๊ตฌ ๋ฐ ๋ฌธํ™”: ํ•œ๊ตญ ์‚ฌํšŒ์™€ ๋ฌธํ™”์— ๋งž์ถ˜ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ์ž‘์—…, ๊ฐ์ • ๋ถ„์„, ๋ฌธ์„œ ์ƒ์„ฑ ๋ฐ ๋ฒˆ์—ญ.
- ๊ณ ๊ฐ ์„œ๋น„์Šค: ์‚ฌ์šฉ์ž์™€์˜ ๋Œ€ํ™” ์ƒ์„ฑ ๋ฐ ๋งž์ถคํ˜• ์‘๋‹ต ์ œ๊ณต.
- ์ด ๋ชจ๋ธ์€ ๋‹ค์–‘ํ•œ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ์ž‘์—…์—์„œ ๋†’์€ ํ™œ์šฉ๋„๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค.
# โน ํ•œ๊ณ„ โ›ˆโ›ˆ
- ktdsBaseLM v0.11์€ ํ•œ๊ตญ์–ด์™€ ํ•œ๊ตญ ๋ฌธํ™”์— ํŠนํ™”๋˜์–ด ์žˆ์œผ๋‚˜,
ํŠน์ • ์˜์—ญ(์˜ˆ: ์ตœ์‹  ๊ตญ์ œ ์ž๋ฃŒ, ์ „๋ฌธ ๋ถ„์•ผ)์˜ ๋ฐ์ดํ„ฐ ๋ถ€์กฑ์œผ๋กœ ์ธํ•ด ๋‹ค๋ฅธ ์–ธ์–ด ๋˜๋Š”
๋ฌธํ™”์— ๋Œ€ํ•œ ์‘๋‹ต์˜ ์ •ํ™•์„ฑ์ด ๋–จ์–ด์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
๋˜ํ•œ, ๋ณต์žกํ•œ ๋…ผ๋ฆฌ์  ์‚ฌ๊ณ ๋ฅผ ์š”๊ตฌํ•˜๋Š” ๋ฌธ์ œ์— ๋Œ€ํ•ด ์ œํ•œ๋œ ์ถ”๋ก  ๋Šฅ๋ ฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์œผ๋ฉฐ,
ํŽธํ–ฅ๋œ ๋ฐ์ดํ„ฐ๊ฐ€ ํฌํ•จ๋  ๊ฒฝ์šฐ ํŽธํ–ฅ๋œ ์‘๋‹ต์ด ์ƒ์„ฑ๋  ๊ฐ€๋Šฅ์„ฑ๋„ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค.
# โบ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•
<pre><code>
import os
import os.path as osp
import sys
import fire
import json
from typing import List, Union
import pandas as pd
import torch
from torch.nn import functional as F
import transformers
from transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl, BitsAndBytesConfig
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR
from transformers import LlamaForCausalLM, LlamaTokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset
from peft import (
LoraConfig,
get_peft_model,
set_peft_model_state_dict
)
from peft import PeftModel
import re
import ast
device = 'auto' #@param {type: "string"}
model = '' #@param {type: "string"}
model = AutoModelForCausalLM.from_pretrained(
model,
quantization_config=bnb_config,
#load_in_4bit=True, # Quantization Load
device_map=device)
tokenizer = AutoTokenizer.from_pretrained(base_LLM_model)
input_text = "์•ˆ๋…•ํ•˜์„ธ์š”."
inputs = tokenizer(input_text, return_tensors="pt")
inputs = inputs.to("cuda:0")
with torch.no_grad():
outputs = model.generate(**inputs, max_length=1024)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
</code></pre>
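Because the foundation model is OpenChat 3.5, which ships with a chat template, conversational prompting can also go through the tokenizer's template. The sketch below assumes this checkpoint preserves the base model's chat template; the user message is an invented example.
<pre><code>
# Conversational prompting via the tokenizer's chat template
# (assumes the OpenChat 3.5 chat template is preserved in this checkpoint)
messages = [{"role": "user", "content": "한국의 전통 명절에 대해 설명해 주세요."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(prompt_ids, max_new_tokens=512)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
</code></pre>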
## ✅ In addition to openchat, ktds plans to release LLMs fine-tuned with Korean culture and knowledge across many domains on other leading LLMs such as LLaMA, Polyglot, and EEVE.