my-t5-summarization-model

This model is a fine-tuned version of google-t5/t5-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2478
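
Because the framework versions listed below include PEFT, this checkpoint is presumably a PEFT (e.g. LoRA) adapter on top of google-t5/t5-base rather than a full model. The following is a minimal inference sketch under that assumption; the repo id is taken from the hosting page, and the "summarize: " prompt prefix (the usual T5 convention) is an assumption, since the card does not document the exact preprocessing.

```python
# Minimal inference sketch (assumed: this repo hosts a PEFT adapter for google-t5/t5-base,
# and inputs were prefixed with "summarize: " during fine-tuning, as is usual for T5).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "google-t5/t5-base"
adapter_id = "rutvikd0512/my-t5-summarization-model"  # repo id taken from the hosting page

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the fine-tuned adapter
model.eval()

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
inputs = tokenizer("summarize: " + dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```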

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
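
These values correspond to a standard Transformers Trainer setup. The sketch below mirrors them; it is an assumed reconstruction (the actual training script is not part of this card), and the per-epoch evaluation is inferred from the validation losses reported in the next section.

```python
# Sketch of training arguments matching the hyperparameters listed above.
# Assumed reconstruction: output_dir and evaluation settings are not documented in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my-t5-summarization-model",  # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    eval_strategy="epoch",   # inferred: validation loss is reported once per epoch below
    adam_beta1=0.9,          # Adam betas/epsilon as listed above (Trainer defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```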

Training results

Training Loss   Epoch   Step   Validation Loss
17.2263         1.0     74     20.5598
10.8174         2.0     148    12.3219
2.9638          3.0     222    0.6010
1.4073          4.0     296    0.2598
1.1503          5.0     370    0.2604
1.5418          6.0     444    0.2595
0.6755          7.0     518    0.2574
0.833           8.0     592    0.2550
0.5194          9.0     666    0.2549
0.7579          10.0    740    0.2553
0.6712          11.0    814    0.2545
0.6565          12.0    888    0.2540
0.262           13.0    962    0.2529
0.3144          14.0    1036   0.2518
0.6226          15.0    1110   0.2509
0.5695          16.0    1184   0.2498
0.2856          17.0    1258   0.2490
0.2869          18.0    1332   0.2484
0.6049          19.0    1406   0.2480
0.6421          20.0    1480   0.2478

Framework versions

  • PEFT 0.12.0
  • Transformers 4.42.4
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1