---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - din0s/asqa
model-index:
  - name: t5-base-asqa-ob
    results: []
---

# t5-base-asqa-ob

This model is a fine-tuned version of t5-base on the ASQA dataset. It achieves the following results on the evaluation set:

- Loss: 1.7356
- Rougelsum: 12.0879

## Model description

More information needed

## Intended uses & limitations

More information needed
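
In lieu of detailed guidance, the sketch below shows one way to load the model for inference. It assumes the checkpoint is published on the Hub as `din0s/t5-base-asqa-ob` and that inputs are plain ASQA-style questions; whether a task prefix or prompt template was used during fine-tuning is not documented here.

```python
# Minimal inference sketch; the Hub id and the bare-question input format are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "din0s/t5-base-asqa-ob"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Who lit the Olympic torch in 1996?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```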

## Training and evaluation data

More information needed
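
The YAML metadata above points to `din0s/asqa`; a minimal loading sketch, assuming the dataset is accessible through the standard `datasets` API:

```python
# Sketch of loading the fine-tuning dataset named in the metadata (din0s/asqa).
from datasets import load_dataset

dataset = load_dataset("din0s/asqa")
print(dataset)  # inspect the splits and columns before reusing them for training
```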

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
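
As a rough guide, these settings map onto `Seq2SeqTrainingArguments` from `transformers` as sketched below. Only the numeric values come from the list above; `output_dir`, the evaluation strategy, and `predict_with_generate` are assumptions.

```python
# Hedged reconstruction of the hyperparameters above, not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-asqa-ob",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="epoch",   # assumed; the results table logs per-epoch eval
    predict_with_generate=True,    # assumed; needed to compute Rougelsum during eval
)
```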

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log        | 1.0   | 355  | 1.8545          | 11.6549   |
| 2.4887        | 2.0   | 710  | 1.8050          | 11.7533   |
| 1.9581        | 3.0   | 1065 | 1.7843          | 11.8327   |
| 1.9581        | 4.0   | 1420 | 1.7722          | 11.9442   |
| 1.9252        | 5.0   | 1775 | 1.7648          | 11.9331   |
| 1.8853        | 6.0   | 2130 | 1.7567          | 11.9788   |
| 1.8853        | 7.0   | 2485 | 1.7519          | 12.0300   |
| 1.8512        | 8.0   | 2840 | 1.7483          | 12.0225   |
| 1.8328        | 9.0   | 3195 | 1.7451          | 12.0402   |
| 1.8115        | 10.0  | 3550 | 1.7436          | 12.0444   |
| 1.8115        | 11.0  | 3905 | 1.7419          | 12.0850   |
| 1.7878        | 12.0  | 4260 | 1.7408          | 12.1047   |
| 1.774         | 13.0  | 4615 | 1.7394          | 12.0839   |
| 1.774         | 14.0  | 4970 | 1.7390          | 12.0910   |
| 1.7787        | 15.0  | 5325 | 1.7381          | 12.0880   |
| 1.7632        | 16.0  | 5680 | 1.7380          | 12.1088   |
| 1.7623        | 17.0  | 6035 | 1.7370          | 12.1046   |
| 1.7623        | 18.0  | 6390 | 1.7368          | 12.0997   |
| 1.7508        | 19.0  | 6745 | 1.7359          | 12.0902   |
| 1.7597        | 20.0  | 7100 | 1.7356          | 12.0879   |
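
The Rougelsum column appears to be the `rougeLsum` score from the standard ROUGE metric, scaled to 0-100. A minimal scoring sketch, assuming the `evaluate` library:

```python
# Sketch of computing ROUGE-Lsum as reported above; the 0-100 scaling is an assumption.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["a generated long-form answer"]
references = ["the reference long-form answer"]
scores = rouge.compute(predictions=predictions, references=references)
print(scores["rougeLsum"] * 100)
```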

### Framework versions

- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1