|
---
license: apache-2.0
datasets:
- MarkrAI/KOpen-HQ-Hermes-2.5-60K
language:
- ko
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
|
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
|
|
This model is a LoRA fine-tune of meta-llama/Meta-Llama-3.1-8B-Instruct, trained with Unsloth.
|
|
|
It was trained on the MarkrAI/KOpen-HQ-Hermes-2.5-60K dataset.
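For reference, the sketch below shows what a typical LoRA setup with Unsloth looks like. The sequence length, 4-bit loading, rank, alpha, and target modules are illustrative assumptions, not the exact configuration used to train this model.

```python
# Illustrative Unsloth LoRA setup; hyperparameters are assumptions, not the
# confirmed training configuration for this model.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=4096,   # assumed
    load_in_4bit=True,     # assumed QLoRA-style loading
)

# Attach LoRA adapters; rank, alpha, and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```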
|
|
|
## How to use |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("NAPS-ai/naps-llama-3.1-8b-instruct-v0.4")
model = AutoModelForCausalLM.from_pretrained("NAPS-ai/naps-llama-3.1-8b-instruct-v0.4")
```
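The loaded model can then be queried directly through the Llama 3.1 chat template. The sketch below is a minimal example; the Korean prompt and generation settings are illustrative, not values confirmed by the authors.

```python
import torch

# "Hello, please introduce yourself." (illustrative prompt)
messages = [{"role": "user", "content": "안녕하세요, 자기소개를 해주세요."}]

# Render the conversation with the model's chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```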
|
|
|
## Chatbot |
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6699b80354725cd6e0ae8e19/1J506GxR0eT6XnKsGVbye.png) |
|
|
|
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NAPS-ai/naps-llama-3.1-8b-instruct-v0.4")
# Load the weights in bfloat16 here; passing model_kwargs to the pipeline has
# no effect once the model object is already instantiated.
model = AutoModelForCausalLM.from_pretrained(
    "NAPS-ai/naps-llama-3.1-8b-instruct-v0.4",
    torch_dtype=torch.bfloat16,
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,  # first CUDA GPU
)

def answering(question):
    messages = [
        # "You are an assistant who always answers kindly."
        {"role": "system", "content": "당신은 항상 친절하게 대답하는 상담원입니다."},
        {"role": "user", "content": question},
    ]
    outputs = pipeline(
        messages,
        max_new_tokens=1024,
        pad_token_id=pipeline.tokenizer.eos_token_id,
    )
    # The pipeline returns the whole conversation (system, user, assistant);
    # the assistant's reply is the last message.
    return outputs[0]["generated_text"][-1]["content"]

while True:
    question = input("Enter your question: ")
    if question == "exit":
        print("Exiting.")
        break
    answer = answering(question)
    print(f"AI answer: {answer}")
```
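The loop reads questions from standard input until you type `exit`. Note that `device=0` places the pipeline on the first CUDA GPU, so a GPU with enough memory for the 8B weights in bfloat16 (roughly 16 GB) is assumed.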
|
|
|
|
|
Contact: [email protected]