---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-xsmall-finetuned-DAGPap22
  results: []
---

# deberta-v3-xsmall-finetuned-DAGPap22

This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0798
- Accuracy: 0.9907
- F1: 0.9934

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 402  | 0.1626          | 0.9477   | 0.9616 |
| 0.4003        | 2.0   | 804  | 0.0586          | 0.9794   | 0.9853 |
| 0.1075        | 3.0   | 1206 | 0.0342          | 0.9907   | 0.9933 |
| 0.0581        | 4.0   | 1608 | 0.1140          | 0.9776   | 0.9838 |
| 0.0245        | 5.0   | 2010 | 0.1409          | 0.9776   | 0.9842 |
| 0.0245        | 6.0   | 2412 | 0.0732          | 0.9832   | 0.9881 |
| 0.0167        | 7.0   | 2814 | 0.1996          | 0.9682   | 0.9778 |
| 0.0139        | 8.0   | 3216 | 0.1219          | 0.9850   | 0.9894 |
| 0.006         | 9.0   | 3618 | 0.0670          | 0.9907   | 0.9934 |
| 0.0067        | 10.0  | 4020 | 0.1036          | 0.9869   | 0.9907 |
| 0.0067        | 11.0  | 4422 | 0.1220          | 0.9776   | 0.9838 |
| 0.0041        | 12.0  | 4824 | 0.1768          | 0.9776   | 0.9839 |
| 0.0007        | 13.0  | 5226 | 0.0943          | 0.9888   | 0.9920 |
| 0.0           | 14.0  | 5628 | 0.0959          | 0.9907   | 0.9934 |
| 0.0054        | 15.0  | 6030 | 0.0915          | 0.9888   | 0.9921 |
| 0.0054        | 16.0  | 6432 | 0.1618          | 0.9794   | 0.9855 |
| 0.0019        | 17.0  | 6834 | 0.0794          | 0.9907   | 0.9934 |
| 0.0           | 18.0  | 7236 | 0.0799          | 0.9907   | 0.9934 |
| 0.0           | 19.0  | 7638 | 0.0797          | 0.9907   | 0.9934 |
| 0.0           | 20.0  | 8040 | 0.0798          | 0.9907   | 0.9934 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
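
### Reproducing the training configuration

As a rough illustration, the hyperparameters listed under "Training hyperparameters" above map onto a `transformers.TrainingArguments` setup along the lines of the sketch below. This is not the original training script: the output directory, the datasets, the `num_labels` value, and the per-epoch evaluation strategy are assumptions made for the example. The Adam betas and epsilon listed above match the Transformers defaults, so they need no explicit arguments.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Values copied from the "Training hyperparameters" section above.
training_args = TrainingArguments(
    output_dir="deberta-v3-xsmall-finetuned-DAGPap22",  # assumption: output path
    learning_rate=4.5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: results above are reported once per epoch
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-xsmall",
    num_labels=2,  # assumption: binary classification (accuracy and F1 are reported)
)

# The train/eval datasets are not described in this card, so the Trainer call is
# left commented out; plug in tokenized datasets to run it.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```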
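
### Example inference sketch

A minimal way to run the fine-tuned checkpoint for classification is sketched below. The repository id is a placeholder for wherever this checkpoint is hosted (Hub repo or local path), and the label meanings are not documented in this card, so the printed label comes from whatever `id2label` mapping was saved with the model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder: substitute the actual Hub repo id or local directory of this checkpoint.
checkpoint = "your-username/deberta-v3-xsmall-finetuned-DAGPap22"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

text = "Example passage to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# Label names are taken from the saved config; they are not described in this card.
print(model.config.id2label[predicted_id])
```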