---
license: mit
datasets:
- Slim205/total_data_baraka_ift
language:
- ar
base_model:
- google/gemma-2-2b-it
---

The goal of this project is to adapt large language models to Arabic. Because Arabic instruction fine-tuning data is scarce, the focus is on building a high-quality instruction fine-tuning (IFT) dataset, fine-tuning models on it, and evaluating their performance across various benchmarks.
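
As a minimal sketch of how one IFT example could be prepared for supervised fine-tuning of the Gemma 2 base model, the snippet below renders an instruction/response pair in Gemma's turn-based chat format. The field names `instruction` and `output` are assumptions about the dataset schema, not confirmed by this card:

```python
# Sketch: format one instruction/response pair into Gemma's chat
# template for supervised fine-tuning. The "instruction"/"output"
# field names are assumed, not taken from the dataset card.

def format_example(example: dict) -> str:
    """Render an IFT pair using Gemma's <start_of_turn> markup."""
    return (
        "<start_of_turn>user\n"
        f"{example['instruction']}<end_of_turn>\n"
        "<start_of_turn>model\n"
        f"{example['output']}<end_of_turn>\n"
    )

sample = {
    "instruction": "ما هي عاصمة المغرب؟",   # "What is the capital of Morocco?"
    "output": "عاصمة المغرب هي الرباط.",     # "The capital of Morocco is Rabat."
}
print(format_example(sample))
```

In practice, a tokenizer's built-in chat template (e.g. `tokenizer.apply_chat_template`) would handle this formatting, but writing it out makes the training-example layout explicit.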