The goal of this project is to adapt large language models for the Arabic language. Due to the scarcity of Arabic instruction fine-tuning data, the focus is on creating a high-quality instruction fine-tuning (IFT) dataset. The project aims to fine-tune models on this dataset and evaluate their performance across various benchmarks.

This model is the 2B version. It was trained for 2 days on 1 A100 GPU using LoRA with a rank of 128, a learning rate of 1e-4, and a cosine learning rate schedule.
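
A minimal sketch of this setup with Hugging Face `transformers` and `peft` is shown below. The rank (128), learning rate (1e-4), and cosine schedule come from the description above; the base checkpoint name, LoRA alpha/dropout, target modules, batch size, and epoch count are illustrative assumptions, not the exact training script.

```python
# Sketch of the LoRA fine-tuning configuration described above.
# Values not stated in the card are marked as assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = "google/gemma-2-2b-it"  # placeholder: substitute the actual base model

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA adapter with rank 128, as stated in the card.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,            # assumed value
    lora_dropout=0.05,         # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Learning rate 1e-4 with a cosine schedule, as stated in the card.
training_args = TrainingArguments(
    output_dir="barka-2b-lora",
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=1,             # assumed; the card only states ~2 days on one A100
    per_device_train_batch_size=4,  # assumed
    bf16=True,
)
# A Trainer (or SFTTrainer) would then be built with these args and the IFT dataset.
```
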
| Model | Average ⬆️ | ACVA | AlGhafa | MMLU | EXAMS | ARC Challenge | ARC Easy | BOOLQ | COPA | HELLASWAG | OPENBOOK QA | PIQA | RACE | SCIQ | TOXIGEN |
|---------------------|------------|-------|---------|-------|-------|---------------|----------|-------|-------|-----------|-------------|-------|-------|-------|---------|
| Slim205/Barka-2b-it | 46.98 | 39.5 | 46.5 | 37.06 | 38.73 | 35.78 | 36.97 | 73.77 | 50 | 28.98 | 43.84 | 56.36 | 36.19 | 55.78 | 78.29 |