---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: CAMeL-Lab/bert-base-arabic-camelbert-ca
model-index:
- name: POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez
  results: []
---

# POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez

This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-ca](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7801
- Accuracy: 0.6392
- F1: 0.6392
- Precision: 0.6392
- Recall: 0.6392

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the reproduction sketch at the end of this card for how they map onto the Transformers API):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.1829        | 1.0   | 568   | 1.0365          | 0.5576   | 0.5576 | 0.5576    | 0.5576 |
| 1.0039        | 2.0   | 1136  | 0.9696          | 0.5895   | 0.5895 | 0.5895    | 0.5895 |
| 0.8669        | 3.0   | 1704  | 1.0291          | 0.5838   | 0.5838 | 0.5838    | 0.5838 |
| 0.7361        | 4.0   | 2272  | 1.0234          | 0.6024   | 0.6024 | 0.6024    | 0.6024 |
| 0.6082        | 5.0   | 2840  | 1.0185          | 0.6219   | 0.6219 | 0.6219    | 0.6219 |
| 0.4956        | 6.0   | 3408  | 1.1033          | 0.6219   | 0.6219 | 0.6219    | 0.6219 |
| 0.3904        | 7.0   | 3976  | 1.2671          | 0.6055   | 0.6055 | 0.6055    | 0.6055 |
| 0.3101        | 8.0   | 4544  | 1.3479          | 0.6228   | 0.6228 | 0.6228    | 0.6228 |
| 0.2398        | 9.0   | 5112  | 1.5939          | 0.6175   | 0.6175 | 0.6175    | 0.6175 |
| 0.1912        | 10.0  | 5680  | 1.6243          | 0.6188   | 0.6188 | 0.6188    | 0.6188 |
| 0.1523        | 11.0  | 6248  | 1.6882          | 0.6321   | 0.6321 | 0.6321    | 0.6321 |
| 0.1244        | 12.0  | 6816  | 1.7997          | 0.6405   | 0.6405 | 0.6405    | 0.6405 |
| 0.0994        | 13.0  | 7384  | 2.0695          | 0.6472   | 0.6472 | 0.6472    | 0.6472 |
| 0.0931        | 14.0  | 7952  | 2.2890          | 0.6356   | 0.6356 | 0.6356    | 0.6356 |
| 0.0739        | 15.0  | 8520  | 2.4964          | 0.6170   | 0.6170 | 0.6170    | 0.6170 |
| 0.0643        | 16.0  | 9088  | 2.5723          | 0.6387   | 0.6387 | 0.6387    | 0.6387 |
| 0.0523        | 17.0  | 9656  | 2.5685          | 0.6410   | 0.6410 | 0.6410    | 0.6410 |
| 0.0365        | 18.0  | 10224 | 2.6118          | 0.6374   | 0.6374 | 0.6374    | 0.6374 |
| 0.0372        | 19.0  | 10792 | 2.8254          | 0.6352   | 0.6352 | 0.6352    | 0.6352 |
| 0.0301        | 20.0  | 11360 | 2.7801          | 0.6392   | 0.6392 | 0.6392    | 0.6392 |

Training loss falls monotonically from 1.18 to 0.03 while validation loss rises from its best value of 0.9696 (epoch 2) to 2.78, which suggests substantial overfitting over the 20 epochs; the best validation accuracy, 0.6472, is reached at epoch 13 rather than at the final checkpoint. All four metric columns coincide in every row; see the note on micro averaging at the end of this card.

### Framework versions

- Transformers 4.38.2
- PyTorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
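## How to use

A minimal inference sketch. The model id below is a placeholder for wherever this checkpoint is published (or a local directory), and the example input is the opening hemistich of Imru' al-Qays's mu'allaqa, chosen only to match the classical-Arabic poetry setting:

```python
# Minimal sketch: run the fine-tuned classifier through the transformers
# pipeline. The model id is a placeholder; point it at the published hub
# repo or a local checkpoint directory for this fine-tune.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez",  # placeholder id/path
)

# Without an id2label mapping from the (unspecified) training dataset, the
# predicted classes appear as generic LABEL_0, LABEL_1, ...
print(classifier("قفا نبك من ذكرى حبيب ومنزل"))
```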
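## Reproducing the training setup

The hyperparameters listed above map onto `TrainingArguments` roughly as follows. The dataset, the number of labels, and the per-epoch evaluation are assumptions (the card reports metrics once per epoch but does not document the data); only the explicit argument values are taken from the card. Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer, so it needs no extra flags:

```python
# Hedged reconstruction of the training configuration; a sketch, not the
# authors' actual script.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "CAMeL-Lab/bert-base-arabic-camelbert-ca"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=4,  # assumption: the card does not state the class count
)

args = TrainingArguments(
    output_dir="POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumption: matches per-epoch results above
)

# The datasets themselves are not documented in this card:
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds, tokenizer=tokenizer,
#                   compute_metrics=compute_metrics)
# trainer.train()
```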
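On the identical metric columns: accuracy, F1, precision, and recall match exactly in every epoch, which is what micro averaging produces on a single-label multiclass task, since total false positives equal total false negatives and micro precision, recall, and F1 all reduce to accuracy. The following `compute_metrics` sketch, using scikit-learn, would reproduce that behaviour; the use of micro averaging is an inference from the numbers, not stated in the card:

```python
# Sketch of a compute_metrics function consistent with the reported numbers,
# assuming micro averaging over single-label multiclass predictions.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="micro"
    )
    # With average="micro", precision == recall == f1 == accuracy,
    # which matches the identical columns in the results table.
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```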