# touring2
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4042
- Rouge1: 60.0314
- Rouge2: 42.51
- Rougel: 59.8461
- Rougelsum: 59.6885
- Gen Len: 9.6526
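
Since the card does not yet document usage, here is a minimal inference sketch. The repo id below is a placeholder derived from the model name (substitute the actual hub path), and the prompt is an assumption: the training task and dataset are not documented.

```python
# Minimal inference sketch. ASSUMPTIONS: the checkpoint path below is a
# placeholder, and the input prompt is illustrative; the actual task and
# prompt format are not documented in this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "touring2"  # hypothetical local path or hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")
# Eval Gen Len averages ~9.7 tokens, so short generations are expected.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```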
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged reconstruction as `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
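
These values map onto Hugging Face `Seq2SeqTrainingArguments` roughly as sketched below. This is an assumption-laden reconstruction, not the actual training script: the output directory, evaluation strategy, and generation setting are guesses.

```python
# Hedged reconstruction of the training configuration from the list above.
# ASSUMPTIONS: output_dir, evaluation_strategy, and predict_with_generate
# are guesses; the actual script and dataset are not documented.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="touring2",        # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the Trainer's
    # default AdamW settings, so no explicit optimizer arguments are needed.
    evaluation_strategy="epoch",  # assumed: the results table logs per epoch
    predict_with_generate=True,   # assumed: required for ROUGE/Gen Len eval
)
# These args would be passed to a Seq2SeqTrainer together with the
# (undocumented) train/eval datasets, the tokenizer, and a data collator.
```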
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6599        | 1.0   | 1249 | 1.4452          | 52.1368 | 34.9356 | 51.6433 | 51.7033   | 9.4643  |
| 1.2659        | 2.0   | 2498 | 1.4013          | 53.5023 | 35.6721 | 53.0881 | 53.1954   | 9.6526  |
| 1.1027        | 3.0   | 3747 | 1.3475          | 59.004  | 41.5484 | 58.8625 | 58.785    | 9.6818  |
| 0.9453        | 4.0   | 4996 | 1.3966          | 58.5942 | 40.7989 | 58.3943 | 58.3703   | 9.7516  |
| 0.9083        | 5.0   | 6245 | 1.4042          | 60.0314 | 42.51   | 59.8461 | 59.6885   | 9.6526  |
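
For context, ROUGE scores on this 0–100 scale are typically produced with the `evaluate` library's rouge metric scaled by 100. The sketch below is illustrative (the function name and example strings are made up), not the card's actual evaluation code.

```python
# Sketch of computing ROUGE on the 0-100 scale used in the table above.
# ASSUMPTION: the card's metrics came from an evaluate/rouge_score-style
# pipeline; this is illustrative, not the actual evaluation code.
import evaluate

rouge = evaluate.load("rouge")

def rouge_100(predictions, references):
    # evaluate returns f-measures in [0, 1]; scale to the card's 0-100 range.
    scores = rouge.compute(predictions=predictions, references=references,
                           use_stemmer=True)
    return {name: round(value * 100, 4) for name, value in scores.items()}

print(rouge_100(["the cat sat on the mat"], ["a cat sat on the mat"]))
```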
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2