# afro-xlmr-large-76L-afr-DAPT-finetuned-20-epochs
This model is a fine-tuned version of [Davlan/afro-xlmr-large-76L](https://huggingface.co/Davlan/afro-xlmr-large-76L) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2345
- F1: 0.8736
- Roc Auc: 0.9183
- Accuracy: 0.7733
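The combination of a high F1 and ROC AUC with a noticeably lower accuracy is characteristic of multi-label evaluation, where accuracy usually means exact-match (subset) accuracy. As a point of reference, here is a minimal sketch of how metrics like these are commonly computed; the multi-label setup, the 0.5 threshold, and micro averaging are assumptions, since the card does not document them:

```python
# Hedged sketch: typical multi-label metric computation.
# Assumptions (not documented on this card): sigmoid outputs thresholded
# at 0.5, micro-averaged F1 and ROC AUC, exact-match (subset) accuracy.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(logits: np.ndarray, labels: np.ndarray) -> dict:
    probs = 1.0 / (1.0 + np.exp(-logits))   # per-label sigmoid probabilities
    preds = (probs >= 0.5).astype(int)      # independent threshold per label
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact match over all labels
    }
```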
## Model description
More information needed
## Intended uses & limitations
More information needed
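In the absence of documented usage, the following is a minimal inference sketch. It assumes the model is a multi-label sequence classifier (suggested by the metrics above) and that `afr` in the name refers to Afrikaans; the input text and the 0.5 decision threshold are placeholders:

```python
# Hedged inference sketch; assumes multi-label classification with a
# sigmoid over the logits and a placeholder 0.5 decision threshold.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sercetexam9/afro-xlmr-large-76L-afr-DAPT-finetuned-20-epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Voorbeeldsin in Afrikaans.", return_tensors="pt")  # placeholder text
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits).squeeze(0)
active_labels = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print(active_labels, probs.tolist())
```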
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
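These settings map directly onto a `transformers.TrainingArguments` configuration. A hedged sketch follows; the `output_dir` is a placeholder, and the dataset and `Trainer` wiring are omitted since they are not documented on this card:

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments
# (Transformers 4.48.1). Only values shown on this card are set; everything
# else is left at its default, and output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="afro-xlmr-large-76L-afr-DAPT-finetuned-20-epochs",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=20,
)
```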
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5348        | 1.0   | 81   | 0.4887          | 0.1383 | 0.5513  | 0.1708   |
| 0.3047        | 2.0   | 162  | 0.2438          | 0.8299 | 0.8772  | 0.7034   |
| 0.1832        | 3.0   | 243  | 0.2173          | 0.8636 | 0.9047  | 0.7143   |
| 0.1559        | 4.0   | 324  | 0.2018          | 0.8632 | 0.9072  | 0.7360   |
| 0.1441        | 5.0   | 405  | 0.2017          | 0.8600 | 0.9012  | 0.7422   |
| 0.1193        | 6.0   | 486  | 0.2044          | 0.8590 | 0.9017  | 0.7407   |
| 0.0899        | 7.0   | 567  | 0.2345          | 0.8736 | 0.9183  | 0.7733   |
| 0.0793        | 8.0   | 648  | 0.2243          | 0.8688 | 0.9142  | 0.7484   |
| 0.0593        | 9.0   | 729  | 0.2439          | 0.8500 | 0.8945  | 0.7174   |
| 0.0524        | 10.0  | 810  | 0.2493          | 0.8610 | 0.9054  | 0.7422   |
| 0.054         | 11.0  | 891  | 0.2408          | 0.8672 | 0.9087  | 0.7578   |
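Note that the reported evaluation metrics match the epoch-7 checkpoint (the best F1 and ROC AUC in the log), and the logged results stop at epoch 11 even though `num_epochs` was set to 20; this pattern suggests the best checkpoint was kept and training ended early (for example via early stopping), though the card does not state this explicitly.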
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0