---
library_name: transformers
tags:
- language-model
- fine-tuned
- instruction-following
- PEFT
- LoRA
- BitsAndBytes
- Persian
- Farsi
- text-generation
datasets:
- taesiri/TinyStories-Farsi
model_name: LLaMA-3.1-8B-Persian-Instruct
pipeline_tag: text-generation
---


# LLaMA-3.1-8B-Persian-Instruct

This model is a fine-tuned version of the `meta-llama/Meta-Llama-3.1-8B-Instruct` model, specifically tailored for generating and understanding Persian text. The fine-tuning was conducted using the [TinyStories-Farsi](https://huggingface.co/datasets/taesiri/TinyStories-Farsi) dataset, which includes a diverse set of short stories in Persian. The primary goal of this fine-tuning was to enhance the model's performance in instruction-following tasks within the Persian language.

## Model Details

### Model Description

The `LLaMA-3.1-8B-Persian-Instruct` model belongs to the LLaMA series, which is known for robust performance across a wide range of NLP tasks. This version is adapted to Persian, making it more effective at generating coherent, contextually relevant responses in that language.

- **Developed by:** Meta AI, fine-tuned by Amir Mohseni  
- **Model type:** Language Model  
- **Language(s) (NLP):** Persian (Farsi)  
- **License:** Apache 2.0  
- **Finetuned from model:** `meta-llama/Meta-Llama-3.1-8B-Instruct`  

### Model Sources

- **Repository:** [LLaMA-3.1-8B-Persian-Instruct on Hugging Face](https://huggingface.co/AmirMohseni/LLaMA-3.1-8B-Persian-Instruct)

## Training Details

### Training Data
The model was fine-tuned using the [TinyStories-Farsi](https://huggingface.co/datasets/taesiri/TinyStories-Farsi) dataset. This dataset provided a rich and diverse linguistic context, helping the model better understand and generate text in Persian.
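
For reference, the dataset can be loaded directly with the `datasets` library. This is a minimal sketch; the `"train"` split name is an assumption, as the card does not list the dataset's splits.

```python
from datasets import load_dataset

# Load the Persian TinyStories dataset used for fine-tuning.
# The "train" split is assumed; check the dataset card for the exact splits.
dataset = load_dataset("taesiri/TinyStories-Farsi", split="train")

print(dataset[0])  # Inspect one example to see the available fields
```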

### Training Procedure
The fine-tuning process was conducted using the following setup:

- **Epochs:** 4
- **Batch Size:** 8
- **Gradient Accumulation Steps:** 2
- **Hardware:** NVIDIA A100 GPU
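
The card does not state which training framework was used. A minimal sketch of the equivalent hyperparameters with `transformers.TrainingArguments` might look like the following; the learning rate, precision, and output path are assumptions, as only the values listed above are confirmed.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.1-8b-persian-sft",  # hypothetical output path
    num_train_epochs=4,                     # as reported above
    per_device_train_batch_size=8,          # as reported above
    gradient_accumulation_steps=2,          # effective batch size of 16
    learning_rate=2e-4,                     # assumption; not stated in the card
    bf16=True,                              # assumption; A100 GPUs support bfloat16
    logging_steps=10,
)
```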

### Fine-Tuning Strategy

To make the fine-tuning process efficient, PEFT (Parameter-Efficient Fine-Tuning) techniques were employed: the base model was loaded in 4-bit precision via `BitsAndBytesConfig(load_in_4bit=True)` and adapted with LoRA. This significantly reduced the compute and memory required while maintaining high performance, keeping the total training time to approximately 2 hours and shrinking the environmental footprint of the run.
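
A minimal sketch of this setup, assuming a standard QLoRA-style configuration with `peft` and `bitsandbytes`; the quantization dtype settings and the LoRA rank, alpha, and target modules are assumptions, since only `load_in_4bit=True` is confirmed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization, as described above; the nf4/compute-dtype choices are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter configuration; rank, alpha, and target modules are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # Only the adapter weights are trainable
```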

## Uses

### Direct Use

This model is well-suited for generating Persian text, particularly in instruction-following settings. It can power applications such as chatbots, customer-support systems, and educational tools, wherever accurate, context-aware Persian language generation is needed.

### Out-of-Scope Use

The model is not intended for tasks requiring deep reasoning, complex multi-turn conversations, or contexts beyond the immediate prompt. It is also not designed for generating text in languages other than Persian.

## How to Get Started with the Model

Here is how you can use this model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_model = "AmirMohseni/Llama-3.1-8B-Instruct-Persian-finetuned-sft"

# Load the base model, then attach the fine-tuned LoRA adapter on top of it
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_model)

tokenizer = AutoTokenizer.from_pretrained(base_model)

# Example usage
prompt = "راه‌های تقویت حافظه چیست؟"  # "What are ways to strengthen memory?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
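
Because the base model is an instruct-tuned chat model, wrapping the prompt with the tokenizer's chat template will usually produce better-formatted responses. A sketch, reusing the `model` and `tokenizer` from above:

```python
messages = [{"role": "user", "content": "راه‌های تقویت حافظه چیست؟"}]

# Format the conversation with the model's chat template before generating
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```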