# xlm-roberta-base-amh-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3820
- F1: 0.6576
- Roc Auc: 0.7940
- Accuracy: 0.5110
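
The combination of F1, ROC AUC, and (subset) accuracy reported above suggests a multi-label classification head. Below is a minimal inference sketch under that assumption; the Hub namespace in `model_id` is a placeholder, and the 0.5 decision threshold is an assumption rather than a value taken from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- replace <namespace> with the actual Hub account.
model_id = "<namespace>/xlm-roberta-base-amh-finetuned-augmentation-LUNAR"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # an Amharic input sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed multi-label setup: sigmoid per label, then threshold each score independently.
probs = torch.sigmoid(logits)[0]
predicted_labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted_labels)
```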
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
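
As a rough guide, these settings map onto `TrainingArguments` as sketched below. This is not the original training script: `output_dir` and the per-epoch evaluation/save strategy are assumptions (the Adam betas and epsilon listed above are the Trainer defaults).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-amh-finetuned-augmentation-LUNAR",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    eval_strategy="epoch",  # assumed; the per-epoch rows in the results table suggest this
    save_strategy="epoch",  # assumed
)
```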
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4179        | 1.0   | 238  | 0.4173          | 0.0    | 0.5     | 0.1735   |
| 0.3969        | 2.0   | 476  | 0.3760          | 0.2222 | 0.5646  | 0.2808   |
| 0.3427        | 3.0   | 714  | 0.3236          | 0.4125 | 0.6565  | 0.3922   |
| 0.2814        | 4.0   | 952  | 0.2987          | 0.5339 | 0.7071  | 0.4448   |
| 0.2655        | 5.0   | 1190 | 0.2953          | 0.5704 | 0.7415  | 0.4658   |
| 0.2185        | 6.0   | 1428 | 0.3105          | 0.5699 | 0.7519  | 0.4448   |
| 0.2206        | 7.0   | 1666 | 0.3094          | 0.5764 | 0.7509  | 0.4669   |
| 0.161         | 8.0   | 1904 | 0.3088          | 0.5871 | 0.7524  | 0.4879   |
| 0.1473        | 9.0   | 2142 | 0.3198          | 0.6278 | 0.7683  | 0.4921   |
| 0.1237        | 10.0  | 2380 | 0.3405          | 0.6264 | 0.7680  | 0.4774   |
| 0.1032        | 11.0  | 2618 | 0.3341          | 0.6362 | 0.7720  | 0.5079   |
| 0.0857        | 12.0  | 2856 | 0.3452          | 0.6521 | 0.7771  | 0.5058   |
| 0.0695        | 13.0  | 3094 | 0.3604          | 0.6552 | 0.7872  | 0.5058   |
| 0.0626        | 14.0  | 3332 | 0.3686          | 0.6472 | 0.7815  | 0.5089   |
| 0.0481        | 15.0  | 3570 | 0.3666          | 0.6477 | 0.7763  | 0.5121   |
| 0.0516        | 16.0  | 3808 | 0.3820          | 0.6576 | 0.7940  | 0.5110   |
| 0.0469        | 17.0  | 4046 | 0.3752          | 0.6493 | 0.7846  | 0.5121   |
| 0.0473        | 18.0  | 4284 | 0.3817          | 0.6448 | 0.7821  | 0.5100   |
| 0.0405        | 19.0  | 4522 | 0.3830          | 0.6529 | 0.7871  | 0.5110   |
| 0.0474        | 20.0  | 4760 | 0.3827          | 0.6519 | 0.7874  | 0.5100   |
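
For reference, metrics of this shape (F1, ROC AUC, subset accuracy) are commonly computed along the following lines for multi-label outputs. The original `compute_metrics` function is not included in this card, so the sigmoid, the 0.5 threshold, and the averaging mode below are assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

def compute_multilabel_metrics(logits: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    """Hypothetical metric computation; threshold and averaging choices are assumptions."""
    probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid over per-label logits
    preds = (probs >= threshold).astype(int)  # independent threshold per label
    return {
        "f1": f1_score(labels, preds, average="macro", zero_division=0),
        "roc_auc": roc_auc_score(labels, probs, average="macro"),
        "accuracy": accuracy_score(labels, preds),  # subset accuracy: exact label-set match
    }
```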
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0