vit5-base-vietnews-summarization-standardized-number

This model is a fine-tuned version of VietAI/vit5-base-vietnews-summarization (the training dataset is not specified in this card). It achieves the following results on the evaluation set:

  • Loss: 0.8266
  • Rouge1: 80.3861
  • Rouge2: 75.4112
  • Rougel: 80.0172
  • Rougelsum: 79.9721
  • Gen Len: 7.7504

Model description

More information needed

Intended uses & limitations

More information needed
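Although the card does not document intended uses, the checkpoint is a standard seq2seq summarization model, so it should load through the usual Hugging Face transformers API. The sketch below is an assumption based on the base model's task, not an official usage example; the repo id is taken from this card, and `transformers` plus `sentencepiece` must be installed before calling it.

```python
# Hedged sketch: loading the fine-tuned checkpoint for Vietnamese
# summarization via the transformers seq2seq API. The repo id comes
# from this model card; generation settings are illustrative defaults.
MODEL_NAME = "ThuyNT03/vit5-base-vietnews-summarization-standardized-number"

def summarize(text: str, max_new_tokens: int = 64) -> str:
    """Generate a summary for one Vietnamese input string."""
    # Import lazily so this module can be inspected without transformers
    # installed; the heavy model download happens only on first call.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Note the short reported Gen Len (~7.75 tokens): this fine-tune appears to produce very short outputs, so a small `max_new_tokens` is likely sufficient.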

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
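The linear scheduler above decays the learning rate from 2e-05 toward zero over all training steps; with 297 steps per epoch (from the results table) and 10 epochs, that is 2970 steps. A minimal sketch of that decay, assuming zero warmup steps since none are reported:

```python
# Hedged sketch of the reported linear LR schedule (no warmup assumed).
BASE_LR = 2e-05
STEPS_PER_EPOCH = 297        # from the training-results table
NUM_EPOCHS = 10
TOTAL_STEPS = STEPS_PER_EPOCH * NUM_EPOCHS  # 2970

def linear_lr(step: int,
              base_lr: float = BASE_LR,
              total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at a given optimizer step under linear decay."""
    # Clamp so the rate never goes negative past the final step.
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps
```

For example, halfway through training (step 1485) the rate has fallen to 1e-05, reaching zero at step 2970.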

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 297  | 0.6401          | 77.9012 | 72.733  | 77.2694 | 77.2675   | 8.9022  |
| 0.7223        | 2.0   | 594  | 0.5931          | 78.4299 | 73.3202 | 78.0444 | 78.0744   | 8.7437  |
| 0.7223        | 3.0   | 891  | 0.6182          | 79.3604 | 74.2323 | 78.907  | 78.9878   | 7.5497  |
| 0.3472        | 4.0   | 1188 | 0.6134          | 79.7284 | 74.8063 | 79.3057 | 79.3545   | 8.5245  |
| 0.3472        | 5.0   | 1485 | 0.6625          | 79.7185 | 74.5798 | 79.3356 | 79.3615   | 8.2749  |
| 0.2388        | 6.0   | 1782 | 0.7068          | 79.7276 | 74.6552 | 79.2519 | 79.2253   | 8.0489  |
| 0.1551        | 7.0   | 2079 | 0.7646          | 80.3805 | 75.394  | 79.9394 | 79.8915   | 7.9983  |
| 0.1551        | 8.0   | 2376 | 0.7736          | 81.2428 | 76.358  | 80.8152 | 80.8272   | 7.6121  |
| 0.1127        | 9.0   | 2673 | 0.8175          | 80.8907 | 76.0278 | 80.3848 | 80.3953   | 7.6543  |
| 0.1127        | 10.0  | 2970 | 0.8266          | 80.3861 | 75.4112 | 80.0172 | 79.9721   | 7.7504  |

Framework versions

  • Transformers 4.33.0
  • Pytorch 2.0.0
  • Datasets 2.1.0
  • Tokenizers 0.13.3