---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_sumarize_model
  results: []
---

# my_awesome_sumarize_model

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2874
- Rouge1: 0.3656
- Rouge2: 0.2555
- Rougel: 0.3544
- Rougelsum: 0.3512
- Gen Len: 19.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 4    | 1.4360          | 0.3727 | 0.2518 | 0.3419 | 0.3401    | 19.0    |
| No log        | 2.0   | 8    | 1.4087          | 0.3389 | 0.2309 | 0.3249 | 0.3216    | 18.65   |
| No log        | 3.0   | 12   | 1.3843          | 0.3389 | 0.2309 | 0.3249 | 0.3216    | 18.65   |
| No log        | 4.0   | 16   | 1.3659          | 0.355  | 0.2429 | 0.3394 | 0.3373    | 19.0    |
| No log        | 5.0   | 20   | 1.3533          | 0.3656 | 0.251  | 0.3489 | 0.3474    | 19.0    |
| No log        | 6.0   | 24   | 1.3431          | 0.3761 | 0.2618 | 0.3599 | 0.3569    | 19.0    |
| No log        | 7.0   | 28   | 1.3323          | 0.3761 | 0.2618 | 0.3599 | 0.3569    | 19.0    |
| No log        | 8.0   | 32   | 1.3239          | 0.3764 | 0.2667 | 0.3651 | 0.362     | 19.0    |
| No log        | 9.0   | 36   | 1.3174          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 10.0  | 40   | 1.3113          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 11.0  | 44   | 1.3061          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 12.0  | 48   | 1.3024          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 13.0  | 52   | 1.2983          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 14.0  | 56   | 1.2949          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 15.0  | 60   | 1.2921          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 16.0  | 64   | 1.2904          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 17.0  | 68   | 1.2890          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 18.0  | 72   | 1.2881          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 19.0  | 76   | 1.2876          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |
| No log        | 20.0  | 80   | 1.2874          | 0.3656 | 0.2555 | 0.3544 | 0.3512    | 19.0    |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
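
### Reproducing the training setup

A minimal sketch of how the hyperparameters listed above map onto `Seq2SeqTrainingArguments`. The dataset, preprocessing, and metric computation are not specified in this card, so they are left out; the evaluation strategy and generation settings are assumptions, not confirmed details of the original run.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Hyperparameters taken from the "Training hyperparameters" section above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers defaults,
# so they do not need to be set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_sumarize_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    fp16=True,                    # "Native AMP" mixed precision (requires a GPU)
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table
    predict_with_generate=True,   # assumption: generate summaries so ROUGE can be computed
)

# The training/evaluation datasets are not documented in this card, so the
# Trainer call below is only indicative:
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=...,  # not specified in this card
#     eval_dataset=...,   # not specified in this card
#     tokenizer=tokenizer,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```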
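
### How to use

A minimal inference sketch using the `summarization` pipeline. The checkpoint id below is a placeholder for wherever this model is stored (a local directory or a Hub repo id), and the `"summarize: "` prefix follows the usual T5 convention; whether this fine-tune relied on that prefix is not stated in the card.

```python
from transformers import pipeline

# Placeholder checkpoint id; replace with the actual local path or Hub repo id.
summarizer = pipeline("summarization", model="my_awesome_sumarize_model")

text = (
    "summarize: The tower is 324 metres tall, about the same height as an "
    "81-storey building, and was the tallest man-made structure in the world "
    "for 41 years until the Chrysler Building was finished in 1930."
)

# Gen Len of 19.0 above suggests short outputs; cap generation accordingly.
print(summarizer(text, max_new_tokens=20)[0]["summary_text"])
```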