---
library_name: transformers
license: mit
base_model: nielsr/lilt-xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
  results: []
---
|

# test

This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6605
- Precision: 0.7460
- Recall: 0.7692
- F1: 0.7575
- Accuracy: 0.7526
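
The card does not record how these metrics were computed; for token classification they typically come from the `seqeval` library. A minimal sketch under that assumption (the IOB2 tag names below are hypothetical):

```python
# Minimal sketch of seqeval-style metrics for token classification.
# Assumption: labels use an IOB2 scheme; "HEADER"/"ANSWER" are hypothetical tags.
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [["B-HEADER", "I-HEADER", "O", "B-ANSWER"]]  # gold word-level labels
y_pred = [["B-HEADER", "O", "O", "B-ANSWER"]]         # model predictions

print(precision_score(y_true, y_pred))  # entity-level precision
print(recall_score(y_true, y_pred))     # entity-level recall
print(f1_score(y_true, y_pred))         # entity-level F1
print(accuracy_score(y_true, y_pred))   # token-level accuracy
```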

## Model description

The base model, [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base), combines the LiLT (Language-Independent Layout Transformer) layout encoder with XLM-RoBERTa text embeddings, so it can process multilingual document text together with word bounding boxes. The precision/recall/F1 metrics above suggest this fine-tuned version adds a token-classification head; the label set is not recorded on this card.

## Intended uses & limitations

The model is intended for token classification over visually rich documents (for example, labeling OCR words in forms or invoices), the task family LiLT was designed for. Because the fine-tuning dataset is not recorded, the label set and domain coverage are unknown; evaluate on your own documents before relying on its predictions. An illustrative inference sketch follows.
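
A minimal inference sketch. It assumes the fine-tuned weights and tokenizer were saved to a local directory (the path `test` below is hypothetical) and that inputs are OCR words with boxes normalized to LiLT's 0-1000 page grid:

```python
# Minimal inference sketch for a LiLT token-classification checkpoint.
# Assumptions: "test" is a hypothetical local path holding the fine-tuned weights
# and tokenizer (otherwise load the tokenizer from the base model repo), and the
# words/boxes below stand in for real OCR output.
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

model_dir = "test"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = LiltForTokenClassification.from_pretrained(model_dir)

# Example OCR output: words plus boxes normalized to a 0-1000 page grid.
words = ["Invoice", "No.", "12345"]
word_boxes = [[80, 40, 190, 60], [200, 40, 240, 60], [250, 40, 330, 60]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LiLT expects one box per token: expand word-level boxes to sub-word tokens,
# giving special tokens the dummy box [0, 0, 0, 0].
bbox = [
    word_boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in encoding.word_ids(batch_index=0)
]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[p] for p in predictions])
```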

## Training and evaluation data

The fine-tuning dataset is not recorded. From the log below, 100 optimizer steps correspond to 1.33 epochs (75 steps per epoch), which at a train batch size of 8 implies a training set of roughly 600 examples.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
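
A minimal sketch of the corresponding `TrainingArguments`. The `output_dir`, `eval_steps`, and `logging_steps` values are inferred from this card (the results table reports evaluation every 100 steps and training loss every 500), not recorded settings:

```python
# Sketch of TrainingArguments matching the hyperparameters above.
# eval_steps/logging_steps are inferred from the results table, not recorded.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test",      # matches the model name on this card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    eval_strategy="steps",  # table below reports eval every 100 steps
    eval_steps=100,
    logging_steps=500,      # training loss appears every 500 steps
    # The Adam betas/epsilon listed above are the Trainer defaults for AdamW,
    # so no explicit optimizer argument is needed.
)
```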

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.3333  | 100  | 1.0907          | 0.4971    | 0.6210 | 0.5522 | 0.5889   |
| No log        | 2.6667  | 200  | 0.7374          | 0.6135    | 0.6857 | 0.6476 | 0.7475   |
| No log        | 4.0     | 300  | 0.8119          | 0.6292    | 0.7193 | 0.6713 | 0.7490   |
| No log        | 5.3333  | 400  | 0.8152          | 0.6930    | 0.7555 | 0.7229 | 0.7616   |
| 0.6197        | 6.6667  | 500  | 0.9915          | 0.6824    | 0.7682 | 0.7227 | 0.7458   |
| 0.6197        | 8.0     | 600  | 1.0589          | 0.6952    | 0.7809 | 0.7356 | 0.7680   |
| 0.6197        | 9.3333  | 700  | 1.1514          | 0.7072    | 0.7285 | 0.7177 | 0.7456   |
| 0.6197        | 10.6667 | 800  | 1.1828          | 0.7190    | 0.7652 | 0.7414 | 0.7625   |
| 0.6197        | 12.0    | 900  | 1.2011          | 0.7301    | 0.7606 | 0.7450 | 0.7679   |
| 0.0998        | 13.3333 | 1000 | 1.2323          | 0.7347    | 0.7662 | 0.7501 | 0.7622   |
| 0.0998        | 14.6667 | 1100 | 1.3060          | 0.7413    | 0.7881 | 0.7640 | 0.7688   |
| 0.0998        | 16.0    | 1200 | 1.3649          | 0.7337    | 0.7636 | 0.7484 | 0.7647   |
| 0.0998        | 17.3333 | 1300 | 1.3661          | 0.7319    | 0.7789 | 0.7547 | 0.7685   |
| 0.0998        | 18.6667 | 1400 | 1.4831          | 0.7386    | 0.7672 | 0.7526 | 0.7635   |
| 0.0226        | 20.0    | 1500 | 1.4216          | 0.7299    | 0.7682 | 0.7486 | 0.7654   |
| 0.0226        | 21.3333 | 1600 | 1.5146          | 0.7295    | 0.7733 | 0.7507 | 0.7539   |
| 0.0226        | 22.6667 | 1700 | 1.6595          | 0.7398    | 0.7748 | 0.7569 | 0.7476   |
| 0.0226        | 24.0    | 1800 | 1.5785          | 0.7609    | 0.7702 | 0.7656 | 0.7677   |
| 0.0226        | 25.3333 | 1900 | 1.5824          | 0.7544    | 0.7886 | 0.7711 | 0.7587   |
| 0.0057        | 26.6667 | 2000 | 1.6605          | 0.7460    | 0.7692 | 0.7575 | 0.7526   |
| 0.0057        | 28.0    | 2100 | 1.6459          | 0.7396    | 0.7697 | 0.7544 | 0.7520   |
| 0.0057        | 29.3333 | 2200 | 1.6605          | 0.7467    | 0.7748 | 0.7605 | 0.7541   |

### Framework versions

- Transformers 4.44.2
- PyTorch 2.4.1+cpu
- Datasets 3.0.0
- Tokenizers 0.19.1