Slim205 committed
Commit 72db123
1 Parent(s): 51b658f

Update README.md

Files changed (1)
  1. README.md +17 -3
README.md CHANGED
@@ -12,6 +12,20 @@ The goal of this project is to adapt large language models for the Arabic langua
 
 This model is the 2B version. It was trained for 2 days on 1 A100 GPU using LoRA with a rank of 128, a learning rate of 1e-4, and a cosine learning rate schedule.
 
- | Model | Average ⬆️ | ACVA | AlGhafa | MMLU | EXAMS | ARC Challenge | ARC Easy | BOOLQ | COPA | HELLASWAG | OPENBOOK QA | PIQA | RACE | SCIQ | TOXIGEN |
- |---------------------|------------|-------|---------|-------|-------|---------------|----------|-------|-------|-----------|-------------|-------|-------|-------|---------|
- | Slim205/Barka-2b-it | 46.98 | 39.5 | 46.5 | 37.06 | 38.73 | 35.78 | 36.97 | 73.77 | 50 | 28.98 | 43.84 | 56.36 | 36.19 | 55.78 | 78.29 |
+ | Metric | Slim205/Barka-2b-it |
+ |----------------------|---------------------|
+ | Average | 46.98 |
+ | ACVA | 39.5 |
+ | AlGhafa | 46.5 |
+ | MMLU | 37.06 |
+ | EXAMS | 38.73 |
+ | ARC Challenge | 35.78 |
+ | ARC Easy | 36.97 |
+ | BOOLQ | 73.77 |
+ | COPA | 50 |
+ | HELLASWAG | 28.98 |
+ | OPENBOOK QA | 43.84 |
+ | PIQA | 56.36 |
+ | RACE | 36.19 |
+ | SCIQ | 55.78 |
+ | TOXIGEN | 78.29 |
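
For readers who want to see what the training setup described above maps to in code, the sketch below shows a LoRA configuration with rank 128, a 1e-4 learning rate, and a cosine schedule using Hugging Face PEFT and Transformers. It is a minimal illustration, not the script used to train Slim205/Barka-2b-it: the base checkpoint, target modules, alpha, dropout, batch size, and epoch count are assumptions, since the commit only states the rank, learning rate, schedule, and hardware.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# The base 2B checkpoint is an assumption; the commit does not name it.
model = AutoModelForCausalLM.from_pretrained("path/to/2b-base-checkpoint")

# LoRA adapter with rank 128, as stated in the README text above.
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,            # assumption: alpha is not given in the README
    lora_dropout=0.05,         # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Learning rate 1e-4 with a cosine schedule, as stated in the README.
training_args = TrainingArguments(
    output_dir="barka-2b-it-lora",
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,   # assumption: batch size is not stated
    num_train_epochs=1,              # assumption: only the 2-day wall-clock time is stated
    bf16=True,                       # assumption: mixed precision on the A100
)
```

A `transformers.Trainer` (or TRL's `SFTTrainer`) would then consume `training_args`, the wrapped model, and an instruction-tuning dataset; that part is omitted because the commit does not describe it.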