t5-base-question-answer-summarization

This model is a fine-tuned version of google-t5/t5-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1420
  • Rouge1: 87.2659
  • Rouge2: 79.1621
  • RougeL: 84.0716
  • RougeLsum: 84.0332
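
The card does not yet document usage, so below is a minimal inference sketch using the transformers pipeline API. The repository id JohnDoe70/t5-summarization and the question/answer input format are assumptions, not documented behaviour; adjust both to match the actual checkpoint and training data.

```python
# Minimal inference sketch; the repo id and input format are placeholders.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="JohnDoe70/t5-summarization",  # placeholder repo id; a local path also works
)

# Hypothetical question/answer pair; the card does not document the exact input format.
text = (
    "question: What causes rainbows? "
    "answer: Rainbows appear when sunlight is refracted, reflected and "
    "dispersed by water droplets, producing a spectrum of light in the sky."
)

print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```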

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
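
As a rough reproduction aid, these settings map onto Seq2SeqTrainingArguments roughly as sketched below; the output directory, evaluation strategy, and dataset handling are assumptions not stated in this card.

```python
# Sketch of training arguments mirroring the hyperparameters listed above.
# Dataset loading and tokenization are omitted; values marked "assumption" are not from the card.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "google-t5/t5-base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-question-answer-summarization",  # assumption
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",   # assumption: the results table shows one evaluation per epoch
    predict_with_generate=True,    # assumption: needed to compute ROUGE during evaluation
)
# The Adam betas/epsilon listed above match the default AdamW optimizer in Transformers,
# so no explicit optimizer configuration is needed here.

# trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer,
#                          train_dataset=..., eval_dataset=...)
# trainer.train()
```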

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.3593        | 1.0   | 450  | 0.1339          | 87.0068 | 78.4882 | 83.5134 | 83.4528   |
| 0.121         | 2.0   | 900  | 0.1273          | 87.3363 | 79.1644 | 83.7472 | 83.7456   |
| 0.0982        | 3.0   | 1350 | 0.1314          | 87.0066 | 78.3475 | 83.0262 | 82.9739   |
| 0.084         | 4.0   | 1800 | 0.1322          | 87.1678 | 78.7514 | 83.4642 | 83.441    |
| 0.074         | 5.0   | 2250 | 0.1345          | 87.2618 | 79.114  | 83.9859 | 83.9444   |
| 0.0685        | 6.0   | 2700 | 0.1378          | 87.1497 | 79.0628 | 83.958  | 83.9482   |
| 0.0609        | 7.0   | 3150 | 0.1419          | 86.993  | 78.781  | 83.8076 | 83.7681   |
| 0.0591        | 8.0   | 3600 | 0.1420          | 87.2659 | 79.1621 | 84.0716 | 84.0332   |
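
The ROUGE figures above are reported as percentages. Below is a small sketch of how comparable scores can be computed with the evaluate library; the example strings are illustrative only.

```python
# Sketch: computing ROUGE with the `evaluate` library; example strings are illustrative.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["paris is the capital of france"],
    references=["Paris is the capital of France."],
    use_stemmer=True,
)
# `scores` has keys rouge1, rouge2, rougeL, rougeLsum as fractions in [0, 1];
# multiply by 100 to match the percentages reported in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```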

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2