
pegasus-x-large-finetuned-summarization

This model is a fine-tuned version of google/pegasus-x-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9503
  • ROUGE-1: 54.656
  • ROUGE-2: 33.2773
  • ROUGE-L: 44.7797
  • ROUGE-Lsum: 51.2888
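
Since the card does not include a usage example, here is a minimal inference sketch using the transformers summarization pipeline. The repository namespace below is a placeholder (it is not shown on this card), and the input text is illustrative.

```python
# Minimal inference sketch. The repo id namespace is a placeholder;
# substitute the actual namespace under which this checkpoint is hosted.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="<namespace>/pegasus-x-large-finetuned-summarization",  # placeholder
)

document = "PEGASUS-X extends PEGASUS to long inputs ..."  # illustrative text
print(summarizer(document, max_length=128, min_length=32)[0]["summary_text"])
```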

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5.6e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
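
As a hedged reference, the listed values map onto the transformers Seq2SeqTrainer API roughly as follows. The dataset, tokenization, and data collator are left as placeholders, since the card does not specify them; the Adam betas and epsilon shown above are also the library defaults.

```python
# Sketch of the listed hyperparameters with Seq2SeqTrainingArguments.
# The train/eval datasets are unknown and left as placeholders.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-x-large")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-large")

args = Seq2SeqTrainingArguments(
    output_dir="pegasus-x-large-finetuned-summarization",
    learning_rate=5.6e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,       # matches the card; also the default
    adam_beta2=0.999,     # matches the card; also the default
    adam_epsilon=1e-8,    # matches the card; also the default
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=...,  # unknown dataset, not specified on the card
#     eval_dataset=...,
#     tokenizer=tokenizer,
# )
# trainer.train()
```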

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| 1.1821        | 1.0   | 308  | 0.9389          | 49.6848 | 29.0753 | 40.9828 | 47.1619    |
| 0.8932        | 2.0   | 616  | 0.8955          | 49.6176 | 28.8588 | 41.7149 | 47.3719    |
| 0.7433        | 3.0   | 924  | 0.9202          | 54.0016 | 31.8254 | 43.4441 | 50.9312    |
| 0.6495        | 4.0   | 1232 | 0.9321          | 52.6912 | 31.6843 | 43.8896 | 49.8726    |
| 0.587         | 5.0   | 1540 | 0.9503          | 54.656  | 33.2773 | 44.7797 | 51.2888    |
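
For context, per-epoch ROUGE scores like these are typically produced with the evaluate library's ROUGE metric and scaled by 100. A minimal sketch, using illustrative strings rather than the card's actual evaluation data:

```python
# Sketch of the usual ROUGE computation (illustrative inputs only).
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the model generated this summary"],
    references=["the reference summary for comparison"],
    use_stemmer=True,
)
# Scale to the percent-style values reported above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```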

Framework versions

  • Transformers 4.28.0
  • PyTorch 2.0.0+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3