arabert_cross_development_task1_fold6

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.6885
  • Qwk: 0.4652
  • Mse: 0.6865
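
For reference, Qwk is the quadratic weighted kappa between predicted and gold scores, and Mse is the mean squared error. Below is a minimal sketch of how these two metrics can be computed with scikit-learn; the score arrays are hypothetical placeholders, not outputs of this model:

```python
# Hedged sketch: computing Qwk and Mse as reported above, assuming
# integer score labels. The arrays are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 2, 1]  # hypothetical reference scores
y_pred = [0, 1, 1, 2, 2]  # hypothetical model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}")
```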

Model description

More information needed

Intended uses & limitations

More information needed
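
Pending fuller documentation, the checkpoint should load like any other Transformers sequence-classification model. The snippet below is a minimal sketch only: the classification/regression head is an assumption (suggested by the Mse/Qwk metrics above), and the input sentence is an arbitrary example:

```python
# Minimal loading sketch; the head type and output interpretation are
# assumptions, since the task is not documented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_development_task1_fold6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("جملة عربية للتجربة", return_tensors="pt")  # arbitrary Arabic example
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpretation depends on the (undocumented) task
```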

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
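
As a rough reconstruction, these settings map onto a Transformers TrainingArguments configuration along the following lines. This is a sketch only: output_dir is a placeholder, dataset and model wiring are omitted, and the Adam settings shown match both the list above and the Trainer's defaults:

```python
# Sketch of a TrainingArguments object matching the hyperparameters above.
# Anything not listed there (e.g. output_dir) is an assumed placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_development_task1_fold6",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,    # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```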

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.1333 | 2 | 2.5610 | 0.0172 | 2.5631 |
| No log | 0.2667 | 4 | 1.3092 | 0.0773 | 1.3077 |
| No log | 0.4 | 6 | 0.6287 | 0.4047 | 0.6286 |
| No log | 0.5333 | 8 | 0.7953 | 0.4436 | 0.7955 |
| No log | 0.6667 | 10 | 0.6050 | 0.3695 | 0.6046 |
| No log | 0.8 | 12 | 0.5414 | 0.3312 | 0.5412 |
| No log | 0.9333 | 14 | 0.3774 | 0.5165 | 0.3781 |
| No log | 1.0667 | 16 | 0.3713 | 0.5340 | 0.3720 |
| No log | 1.2 | 18 | 0.3244 | 0.5756 | 0.3250 |
| No log | 1.3333 | 20 | 0.3167 | 0.6313 | 0.3171 |
| No log | 1.4667 | 22 | 0.3711 | 0.6258 | 0.3708 |
| No log | 1.6 | 24 | 0.3620 | 0.6705 | 0.3621 |
| No log | 1.7333 | 26 | 0.3546 | 0.6894 | 0.3547 |
| No log | 1.8667 | 28 | 0.3904 | 0.5542 | 0.3900 |
| No log | 2.0 | 30 | 0.4427 | 0.4960 | 0.4420 |
| No log | 2.1333 | 32 | 0.4085 | 0.5287 | 0.4081 |
| No log | 2.2667 | 34 | 0.3250 | 0.6490 | 0.3251 |
| No log | 2.4 | 36 | 0.3625 | 0.7352 | 0.3630 |
| No log | 2.5333 | 38 | 0.3719 | 0.7111 | 0.3719 |
| No log | 2.6667 | 40 | 0.4472 | 0.5221 | 0.4462 |
| No log | 2.8 | 42 | 0.5148 | 0.4902 | 0.5136 |
| No log | 2.9333 | 44 | 0.4050 | 0.5405 | 0.4043 |
| No log | 3.0667 | 46 | 0.3378 | 0.6397 | 0.3376 |
| No log | 3.2 | 48 | 0.3496 | 0.5949 | 0.3493 |
| No log | 3.3333 | 50 | 0.3447 | 0.6021 | 0.3444 |
| No log | 3.4667 | 52 | 0.3460 | 0.5885 | 0.3457 |
| No log | 3.6 | 54 | 0.3308 | 0.6420 | 0.3306 |
| No log | 3.7333 | 56 | 0.3377 | 0.6102 | 0.3374 |
| No log | 3.8667 | 58 | 0.3579 | 0.5907 | 0.3575 |
| No log | 4.0 | 60 | 0.3816 | 0.5790 | 0.3810 |
| No log | 4.1333 | 62 | 0.4251 | 0.5479 | 0.4242 |
| No log | 4.2667 | 64 | 0.4317 | 0.5444 | 0.4308 |
| No log | 4.4 | 66 | 0.5698 | 0.4745 | 0.5684 |
| No log | 4.5333 | 68 | 0.5338 | 0.4833 | 0.5325 |
| No log | 4.6667 | 70 | 0.4617 | 0.5347 | 0.4607 |
| No log | 4.8 | 72 | 0.4937 | 0.4804 | 0.4925 |
| No log | 4.9333 | 74 | 0.5167 | 0.4702 | 0.5156 |
| No log | 5.0667 | 76 | 0.4987 | 0.4743 | 0.4977 |
| No log | 5.2 | 78 | 0.5594 | 0.4792 | 0.5582 |
| No log | 5.3333 | 80 | 0.5679 | 0.4788 | 0.5667 |
| No log | 5.4667 | 82 | 0.5202 | 0.4874 | 0.5190 |
| No log | 5.6 | 84 | 0.5297 | 0.4891 | 0.5284 |
| No log | 5.7333 | 86 | 0.4835 | 0.4882 | 0.4825 |
| No log | 5.8667 | 88 | 0.5151 | 0.4891 | 0.5140 |
| No log | 6.0 | 90 | 0.6542 | 0.4666 | 0.6526 |
| No log | 6.1333 | 92 | 0.7260 | 0.4338 | 0.7242 |
| No log | 6.2667 | 94 | 0.5806 | 0.4788 | 0.5791 |
| No log | 6.4 | 96 | 0.4674 | 0.5006 | 0.4664 |
| No log | 6.5333 | 98 | 0.4558 | 0.5000 | 0.4549 |
| No log | 6.6667 | 100 | 0.5518 | 0.4839 | 0.5504 |
| No log | 6.8 | 102 | 0.6844 | 0.4344 | 0.6825 |
| No log | 6.9333 | 104 | 0.6391 | 0.4542 | 0.6374 |
| No log | 7.0667 | 106 | 0.5221 | 0.5072 | 0.5208 |
| No log | 7.2 | 108 | 0.5030 | 0.5053 | 0.5018 |
| No log | 7.3333 | 110 | 0.5677 | 0.4910 | 0.5661 |
| No log | 7.4667 | 112 | 0.6657 | 0.4587 | 0.6638 |
| No log | 7.6 | 114 | 0.6913 | 0.4396 | 0.6893 |
| No log | 7.7333 | 116 | 0.6322 | 0.4662 | 0.6303 |
| No log | 7.8667 | 118 | 0.5615 | 0.4847 | 0.5599 |
| No log | 8.0 | 120 | 0.5037 | 0.5192 | 0.5025 |
| No log | 8.1333 | 122 | 0.4986 | 0.5275 | 0.4974 |
| No log | 8.2667 | 124 | 0.5375 | 0.4975 | 0.5361 |
| No log | 8.4 | 126 | 0.6271 | 0.4694 | 0.6252 |
| No log | 8.5333 | 128 | 0.7181 | 0.4380 | 0.7160 |
| No log | 8.6667 | 130 | 0.7262 | 0.4320 | 0.7241 |
| No log | 8.8 | 132 | 0.6675 | 0.4566 | 0.6655 |
| No log | 8.9333 | 134 | 0.5875 | 0.4826 | 0.5859 |
| No log | 9.0667 | 136 | 0.5616 | 0.4930 | 0.5602 |
| No log | 9.2 | 138 | 0.5683 | 0.4914 | 0.5668 |
| No log | 9.3333 | 140 | 0.5908 | 0.4826 | 0.5892 |
| No log | 9.4667 | 142 | 0.6226 | 0.4814 | 0.6208 |
| No log | 9.6 | 144 | 0.6525 | 0.4656 | 0.6507 |
| No log | 9.7333 | 146 | 0.6742 | 0.4610 | 0.6723 |
| No log | 9.8667 | 148 | 0.6879 | 0.4652 | 0.6859 |
| No log | 10.0 | 150 | 0.6885 | 0.4652 | 0.6865 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1