---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mango-32-0.00002-10-fin
  results: []
---

# mango-32-0.00002-10-fin

This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5883
- Accuracy: 0.6357
- F1: 0.6324

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 233  | 1.7759          | 0.6095   | 0.6127 |
| No log        | 2.0   | 466  | 1.8463          | 0.6030   | 0.5997 |
| 0.1567        | 3.0   | 699  | 1.8531          | 0.6297   | 0.6194 |
| 0.1567        | 4.0   | 932  | 2.0262          | 0.6183   | 0.6180 |
| 0.11          | 5.0   | 1165 | 2.1822          | 0.6167   | 0.6193 |
| 0.11          | 6.0   | 1398 | 2.3360          | 0.6380   | 0.6294 |
| 0.0622        | 7.0   | 1631 | 2.3473          | 0.6312   | 0.6286 |
| 0.0622        | 8.0   | 1864 | 2.5031          | 0.6319   | 0.6283 |
| 0.0294        | 9.0   | 2097 | 2.5552          | 0.6359   | 0.6315 |
| 0.0294        | 10.0  | 2330 | 2.5883          | 0.6357   | 0.6324 |

Note that the validation loss rises steadily after the first epoch while the training loss keeps falling, which suggests overfitting; accuracy and F1 plateau around 0.63 from epoch 3 onward.

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0a0+b5021ba
- Datasets 2.6.2
- Tokenizers 0.14.1
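
## How to use

A minimal inference sketch, assuming the checkpoint is published on the Hub under the name shown in this card. The actual repository path, label set, and task are not documented here, so the repo id and the Korean example sentence below are assumptions:

```python
from transformers import pipeline

# Assumption: replace with the real Hub path of this checkpoint,
# e.g. "<user>/mango-32-0.00002-10-fin".
classifier = pipeline("text-classification", model="mango-32-0.00002-10-fin")

# klue/roberta-large is a Korean encoder, so Korean input is assumed.
print(classifier("이 영화는 기대 이상으로 재미있었다."))
# -> [{'label': 'LABEL_0', 'score': ...}]  (label names depend on the
#    fine-tuning config; id2label is not documented in this card)
```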
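
## Reproducing the training setup

A sketch of a `Trainer` configuration matching the hyperparameters listed above. The training data is not documented, so the two-example stand-in dataset, the binary label count, and the macro-averaged F1 are assumptions made only to keep the snippet self-contained and runnable:

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "klue/roberta-large"
NUM_LABELS = 2  # assumption: the real label count is not documented

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

# Stand-in data: the actual train/eval sets are not documented in this card.
raw = Dataset.from_dict(
    {"text": ["좋은 예시 문장입니다.", "나쁜 예시 문장입니다."], "label": [0, 1]}
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = raw.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Mirrors the Accuracy/F1 columns above; the F1 averaging mode is not
    # documented, so "macro" is an assumption.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }

args = TrainingArguments(
    output_dir="mango-32-0.00002-10-fin",
    learning_rate=2e-05,
    per_device_train_batch_size=64,  # assumes the reported batch size was per device
    per_device_eval_batch_size=64,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="epoch",  # Transformers 4.34 spelling of this argument
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,
    tokenizer=tokenizer,  # enables the default padding collator
    compute_metrics=compute_metrics,
)
trainer.train()
```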