---
license: mit
datasets:
- Slim205/Barka_data_2B
language:
- ar
base_model:
- google/gemma-2-2b-it
---
![Barka-2b-it](photo.png)
Welcome to Slim205/Barka-2b-it: the best 2B Arabic LLM. Feel free to use it and give me feedback on it.
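Here is a minimal usage sketch with the standard `transformers` chat-template API. Since the model fine-tunes `google/gemma-2-2b-it`, it is assumed to keep the Gemma chat template; the prompt and generation settings are illustrative defaults, not official recommendations.

```python
# Minimal inference sketch (assumed setup: transformers >= 4.42, a GPU with bf16 support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Slim205/Barka-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example Arabic prompt: "What is the capital of Tunisia?"
messages = [{"role": "user", "content": "ما هي عاصمة تونس؟"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```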
# Motivation
The goal of this project was to adapt large language models to Arabic and create a new state-of-the-art Arabic LLM. Because of the scarcity of Arabic instruction fine-tuning data, few LLMs have been trained specifically for Arabic, which is surprising given the large number of Arabic speakers.
Our final model was trained on a high-quality, synthetically generated instruction fine-tuning (IFT) dataset, and was then evaluated on the Hugging Face Arabic leaderboard.
# Training
This model is the 2B version. It was trained for two days on a single A100 GPU using LoRA with a rank of 128, a learning rate of 1e-4, and a cosine learning rate schedule.
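The sketch below shows how that LoRA setup could look with the `peft` and `transformers` libraries (an assumption about the stack used). The rank, learning rate, and cosine schedule come from the card; `lora_alpha`, the target modules, batch size, and epoch count are illustrative assumptions, not the author's exact configuration.

```python
# LoRA fine-tuning configuration sketch matching the hyperparameters stated above.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

lora_config = LoraConfig(
    r=128,                                                    # rank 128, as stated in the card
    lora_alpha=256,                                           # assumption: not given in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="barka-2b-it",
    learning_rate=1e-4,              # as stated in the card
    lr_scheduler_type="cosine",      # cosine schedule, as stated in the card
    per_device_train_batch_size=4,   # assumption
    num_train_epochs=1,              # assumption
    bf16=True,
)
```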
# Evaluation
| Benchmark     | Slim205/Barka-2b-it |
|---------------|---------------------|
| Average       | 46.98               |
| ACVA          | 39.50               |
| AlGhafa       | 46.50               |
| MMLU          | 37.06               |
| EXAMS         | 38.73               |
| ARC Challenge | 35.78               |
| ARC Easy      | 36.97               |
| BoolQ         | 73.77               |
| COPA          | 50.00               |
| HellaSwag     | 28.98               |
| OpenBookQA    | 43.84               |
| PIQA          | 56.36               |
| RACE          | 36.19               |
| SciQ          | 55.78               |
| ToxiGen       | 78.29               |
Please refer to https://github.com/Slim205/Arabicllm/ for more details.