# arabert_cross_vocabulary_task1_fold2
This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (Qwk and Mse are explained after the list):
- Loss: 0.8384
- Qwk: 0.0355
- Mse: 0.8384
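
Qwk is presumably quadratic weighted kappa (Cohen's kappa with quadratic weights) and Mse mean squared error; the reported Loss equals the Mse, which suggests an MSE training objective on a single regression output. A minimal scikit-learn sketch of how these two metrics can be computed (the labels and score scale below are hypothetical):

```python
# Sketch only: assumes Qwk = quadratic weighted kappa and Mse = mean squared
# error, computed on integer scores; the values below are hypothetical.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 1, 0]  # hypothetical gold scores
y_pred = [0, 2, 2, 1, 1]  # hypothetical (rounded) model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # "Qwk"
mse = mean_squared_error(y_true, y_pred)                      # "Mse"
print(f"Qwk: {qwk:.4f}, Mse: {mse:.4f}")
```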
## Model description
More information needed
## Intended uses & limitations
More information needed
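
Since no usage guidance is given, here is a minimal loading/inference sketch. It assumes the checkpoint carries a sequence-classification head with a single regression output (consistent with the MSE/QWK metrics above, but not stated in the card); the input text is hypothetical.

```python
# Minimal loading/inference sketch. Assumption (not stated in the card): the
# checkpoint has a sequence-classification head with one regression output.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "salbatarni/arabert_cross_vocabulary_task1_fold2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")  # hypothetical input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze().item())  # predicted score, under the regression-head assumption
```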
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
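
For reference, a hedged sketch of transformers `TrainingArguments` matching the list above. Only the hyperparameter values come from this card; mapping `train_batch_size`/`eval_batch_size` to the per-device arguments is an assumption, and `output_dir` is hypothetical.

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
# Only the values are from the card; output_dir and the per-device batch-size
# mapping are illustrative assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task1_fold2",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```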
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:---|:---|:---|:---|:---|:---|
No log | 0.1176 | 2 | 4.6248 | -0.0324 | 4.6248 |
No log | 0.2353 | 4 | 1.7789 | 0.0201 | 1.7789 |
No log | 0.3529 | 6 | 0.8842 | 0.0565 | 0.8842 |
No log | 0.4706 | 8 | 0.8601 | -0.0731 | 0.8601 |
No log | 0.5882 | 10 | 0.8512 | -0.0838 | 0.8512 |
No log | 0.7059 | 12 | 0.7752 | -0.0229 | 0.7752 |
No log | 0.8235 | 14 | 0.7856 | 0.0496 | 0.7856 |
No log | 0.9412 | 16 | 0.7647 | 0.0550 | 0.7647 |
No log | 1.0588 | 18 | 0.8100 | 0.0 | 0.8100 |
No log | 1.1765 | 20 | 0.9168 | 0.0 | 0.9168 |
No log | 1.2941 | 22 | 0.8569 | 0.0 | 0.8569 |
No log | 1.4118 | 24 | 0.7547 | 0.0 | 0.7547 |
No log | 1.5294 | 26 | 0.7737 | 0.0 | 0.7737 |
No log | 1.6471 | 28 | 0.7574 | 0.0 | 0.7574 |
No log | 1.7647 | 30 | 0.7125 | 0.0550 | 0.7125 |
No log | 1.8824 | 32 | 0.7216 | 0.0 | 0.7216 |
No log | 2.0 | 34 | 1.0254 | -0.0072 | 1.0254 |
No log | 2.1176 | 36 | 1.2005 | 0.1411 | 1.2005 |
No log | 2.2353 | 38 | 1.0049 | -0.0072 | 1.0049 |
No log | 2.3529 | 40 | 0.7418 | 0.0140 | 0.7418 |
No log | 2.4706 | 42 | 0.7130 | 0.0441 | 0.7130 |
No log | 2.5882 | 44 | 0.7791 | 0.0 | 0.7791 |
No log | 2.7059 | 46 | 0.9107 | -0.0072 | 0.9107 |
No log | 2.8235 | 48 | 0.8828 | 0.0 | 0.8828 |
No log | 2.9412 | 50 | 0.7509 | 0.0268 | 0.7509 |
No log | 3.0588 | 52 | 0.7111 | 0.1173 | 0.7111 |
No log | 3.1765 | 54 | 0.7162 | 0.1309 | 0.7162 |
No log | 3.2941 | 56 | 0.7366 | 0.0386 | 0.7366 |
No log | 3.4118 | 58 | 0.8225 | 0.0 | 0.8225 |
No log | 3.5294 | 60 | 0.7895 | -0.0072 | 0.7895 |
No log | 3.6471 | 62 | 0.7926 | -0.0072 | 0.7926 |
No log | 3.7647 | 64 | 0.8031 | -0.0072 | 0.8031 |
No log | 3.8824 | 66 | 0.8109 | 0.0 | 0.8109 |
No log | 4.0 | 68 | 0.7670 | 0.0361 | 0.7670 |
No log | 4.1176 | 70 | 0.7062 | 0.0643 | 0.7062 |
No log | 4.2353 | 72 | 0.6998 | 0.1209 | 0.6998 |
No log | 4.3529 | 74 | 0.7169 | 0.0755 | 0.7169 |
No log | 4.4706 | 76 | 0.8345 | -0.0072 | 0.8345 |
No log | 4.5882 | 78 | 0.9136 | 0.0387 | 0.9136 |
No log | 4.7059 | 80 | 0.8757 | 0.0086 | 0.8757 |
No log | 4.8235 | 82 | 0.8750 | 0.0086 | 0.8750 |
No log | 4.9412 | 84 | 0.8295 | 0.0 | 0.8295 |
No log | 5.0588 | 86 | 0.7644 | 0.0069 | 0.7644 |
No log | 5.1765 | 88 | 0.7883 | 0.0069 | 0.7883 |
No log | 5.2941 | 90 | 0.8861 | 0.1343 | 0.8861 |
No log | 5.4118 | 92 | 0.8567 | -0.0144 | 0.8567 |
No log | 5.5294 | 94 | 0.7787 | 0.0480 | 0.7787 |
No log | 5.6471 | 96 | 0.7991 | 0.0361 | 0.7991 |
No log | 5.7647 | 98 | 0.8546 | 0.0361 | 0.8546 |
No log | 5.8824 | 100 | 0.8059 | 0.0434 | 0.8059 |
No log | 6.0 | 102 | 0.7338 | 0.0846 | 0.7338 |
No log | 6.1176 | 104 | 0.7167 | 0.1209 | 0.7167 |
No log | 6.2353 | 106 | 0.7205 | 0.1209 | 0.7205 |
No log | 6.3529 | 108 | 0.7597 | 0.0707 | 0.7597 |
No log | 6.4706 | 110 | 0.8598 | -0.0136 | 0.8598 |
No log | 6.5882 | 112 | 0.8616 | 0.0376 | 0.8616 |
No log | 6.7059 | 114 | 0.7980 | 0.0940 | 0.7980 |
No log | 6.8235 | 116 | 0.7853 | 0.0940 | 0.7853 |
No log | 6.9412 | 118 | 0.7673 | 0.1027 | 0.7673 |
No log | 7.0588 | 120 | 0.7550 | 0.1069 | 0.7550 |
No log | 7.1765 | 122 | 0.7670 | 0.1027 | 0.7670 |
No log | 7.2941 | 124 | 0.8362 | 0.0822 | 0.8362 |
No log | 7.4118 | 126 | 0.8895 | -0.0054 | 0.8895 |
No log | 7.5294 | 128 | 0.9417 | -0.0470 | 0.9417 |
No log | 7.6471 | 130 | 0.9102 | -0.0406 | 0.9102 |
No log | 7.7647 | 132 | 0.8290 | 0.0355 | 0.8290 |
No log | 7.8824 | 134 | 0.7603 | 0.0736 | 0.7603 |
No log | 8.0 | 136 | 0.7354 | 0.0643 | 0.7354 |
No log | 8.1176 | 138 | 0.7294 | 0.0643 | 0.7294 |
No log | 8.2353 | 140 | 0.7448 | 0.0596 | 0.7448 |
No log | 8.3529 | 142 | 0.7876 | -0.0216 | 0.7876 |
No log | 8.4706 | 144 | 0.8424 | 0.0081 | 0.8424 |
No log | 8.5882 | 146 | 0.8830 | 0.0295 | 0.8830 |
No log | 8.7059 | 148 | 0.8829 | 0.0295 | 0.8829 |
No log | 8.8235 | 150 | 0.8593 | 0.0295 | 0.8593 |
No log | 8.9412 | 152 | 0.8214 | 0.0355 | 0.8214 |
No log | 9.0588 | 154 | 0.7878 | 0.1217 | 0.7878 |
No log | 9.1765 | 156 | 0.7838 | 0.1255 | 0.7838 |
No log | 9.2941 | 158 | 0.7871 | 0.1255 | 0.7871 |
No log | 9.4118 | 160 | 0.7920 | 0.1255 | 0.7920 |
No log | 9.5294 | 162 | 0.7986 | 0.0388 | 0.7986 |
No log | 9.6471 | 164 | 0.8132 | 0.0479 | 0.8132 |
No log | 9.7647 | 166 | 0.8290 | 0.0355 | 0.8290 |
No log | 9.8824 | 168 | 0.8368 | 0.0355 | 0.8368 |
No log | 10.0 | 170 | 0.8384 | 0.0355 | 0.8384 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1