# my_wikilingua_model2
This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.5821
- Rouge1: 0.2402
- Rouge2: 0.0747
- RougeL: 0.1991
- RougeLsum: 0.1993
- Gen Len: 18.82
## Model description
More information needed
## Intended uses & limitations
More information needed
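Pending fuller documentation, the sketch below shows one way to run the checkpoint for summarization with the `transformers` pipeline. The model id `my_wikilingua_model2` is a placeholder for wherever this checkpoint is actually hosted, and the input text and generation lengths are purely illustrative.

```python
from transformers import pipeline

# Placeholder model id; point this at the actual checkpoint location.
summarizer = pipeline("summarization", model="my_wikilingua_model2")

article = (
    "To make cold brew coffee, coarsely grind the beans, combine them with "
    "cold water at roughly a 1:8 ratio, and steep in the refrigerator for "
    "12 to 24 hours before straining."
)

# Gen Len above averages about 19 tokens, so short max_length values are reasonable.
print(summarizer(article, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```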
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
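These settings map directly onto `Seq2SeqTrainingArguments`; a minimal sketch follows, assuming the standard `Seq2SeqTrainer` setup. The `output_dir` and `evaluation_strategy` values are assumptions (the results table reports one evaluation per epoch), and the Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; output_dir and
# evaluation_strategy are assumptions, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="my_wikilingua_model2",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="epoch",  # results table shows one eval per epoch
    predict_with_generate=True,   # generate text so ROUGE can be scored
)
```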
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 100  | 2.6861          | 0.2284 | 0.0646 | 0.1832 | 0.1830    | 18.9375 |
| No log        | 2.0   | 200  | 2.6137          | 0.2343 | 0.0704 | 0.1919 | 0.1916    | 18.84   |
| No log        | 3.0   | 300  | 2.5890          | 0.2384 | 0.0729 | 0.1967 | 0.1966    | 18.88   |
| No log        | 4.0   | 400  | 2.5821          | 0.2402 | 0.0747 | 0.1991 | 0.1993    | 18.82   |
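The ROUGE columns above can be reproduced with the standalone `evaluate` library; that library is not listed under the framework versions below, so its use here is an assumption, and the prediction/reference pair is a toy example.

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy prediction/reference pair; real scoring runs over the full eval split.
predictions = ["steep coarse coffee grounds in cold water for a day, then strain."]
references = ["grind beans coarsely, steep in cold water 12-24 hours, and strain."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```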
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3