|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- mistral |
|
- trl |
|
- sft |
|
datasets: |
|
- merve/turkish_instructions |
|
--- |
|
|
|
- **Developed by:** notbdq |
|
- **License:** apache-2.0 |
|
|
|
- This model is mistral-7b-instruct-v0.2 fine-tuned on the merve/turkish_instructions dataset (a quick way to inspect the data is sketched below).
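
As a hedged aside (not the training script), the dataset's columns and an example row can be inspected with the `datasets` library, assuming the default train split:

```python
from datasets import load_dataset

# Load the instruction-tuning data used for the fine-tune.
ds = load_dataset("merve/turkish_instructions", split="train")

print(ds)     # column names and row count
print(ds[0])  # one instruction/input/response example
```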
|
|
|
- Instruct format (an Alpaca-style prompt in Turkish; the preamble reads, in English: "Below is an instruction that describes a task and an input that provides further context. Write a response that appropriately completes the request."):
|
```python |
|
"Aşağıda bir görevi tanımlayan bir talimat ve daha fazla bağlam sağlayan bir girdi bulunmaktadır. Talebi uygun şekilde tamamlayan bir yanıt yazın.\n\n### Talimat:\n{}\n\n### Girdi:\n{}\n\n### Yanıt:\n{}" |
|
``` |
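
A minimal sketch of filling the template before tokenization. The instruction and input strings below are illustrative only; the response slot is left empty so the model generates it:

```python
# The template from above, assigned to a constant for formatting.
TEMPLATE = (
    "Aşağıda bir görevi tanımlayan bir talimat ve daha fazla bağlam sağlayan "
    "bir girdi bulunmaktadır. Talebi uygun şekilde tamamlayan bir yanıt yazın."
    "\n\n### Talimat:\n{}\n\n### Girdi:\n{}\n\n### Yanıt:\n{}"
)

prompt = TEMPLATE.format(
    "Aşağıdaki metni özetle.",  # instruction: "Summarize the text below."
    "Yapay zeka, makinelerin veriden öğrenmesini sağlayan yöntemler bütünüdür.",  # example input
    "",  # response left empty; the model completes it
)
print(prompt)
```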
|
|
|
- Example inference code:
|
```python |
|
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("notbdq/mistral-turkish-v2")
tokenizer = AutoTokenizer.from_pretrained("notbdq/mistral-turkish-v2")

messages = [
    {"role": "user", "content": "Yapay zeka nasıl bulundu?"},  # "How was artificial intelligence invented?"
]

# Tokenize the conversation with the model's chat template.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1000 new tokens, then decode prompt + completion.
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
|
``` |
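
If GPU memory is tight, the loading step can be swapped for a half-precision variant. This is a minimal sketch, assuming a CUDA GPU with enough VRAM and the `accelerate` package installed:

```python
import torch
from transformers import AutoModelForCausalLM

# Half-precision weights roughly halve GPU memory use; device_map="auto"
# places the model on the available device(s) automatically.
model = AutoModelForCausalLM.from_pretrained(
    "notbdq/mistral-turkish-v2",
    torch_dtype=torch.float16,
    device_map="auto",
)
```

Note that `batch_decode` above returns the prompt together with the completion; slicing `generated_ids[:, model_inputs.shape[1]:]` before decoding keeps only the newly generated tokens.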