---
base_model: unsloth/qwen2.5-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
language:
- en
- ar
datasets:
- Yasbok/Alpaca_arabic_instruct
---
# Uploaded model
- **Developed by:** Wajdi1976
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/qwen2.5-3b-bnb-4bit
### First, Load the Model
```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048  # choose any; Unsloth auto-supports RoPE scaling internally
dtype = None  # None for auto-detection; torch.float16 for Tesla T4/V100, torch.bfloat16 for Ampere+
load_in_4bit = True  # use 4-bit quantization to reduce memory usage; can be False
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Wajdi1976/alpaca_arabic_Qwen2.5-3B",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # needed only for gated models such as meta-llama/Llama-2-7b-hf
)
```
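To confirm the checkpoint actually loaded in 4-bit, you can inspect the model's device and memory footprint. A minimal sketch, assuming the `model` object from the snippet above:

```python
# Sanity check: report where the model landed and its approximate memory footprint.
print(next(model.parameters()).device)  # e.g. cuda:0
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")  # roughly 2 GB is expected for a 3B model in 4-bit
```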
### Second, Try the Model
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "استخدم البيانات المعطاة لحساب الوسيط.",  # instruction: "Use the given data to calculate the median."
            "[2 ، 3 ، 7 ، 8 ، 10]",  # input: the list [2, 3, 7, 8, 10]
            "",  # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
print(tokenizer.batch_decode(outputs)[0])
```
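For interactive use you may prefer to stream tokens as they are generated rather than decoding the full sequence at the end. A minimal sketch using Hugging Face's `TextStreamer` (with `skip_prompt=True` so the prompt is not echoed back), assuming the `model`, `tokenizer`, and `inputs` from the snippet above:

```python
from transformers import TextStreamer

# Stream tokens to stdout as they are produced; skip_prompt=True hides the echoed prompt.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64, use_cache = True)
```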
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)