# finetuned-baseline-phase-0.0
This model is a fine-tuned version of valhalla/t5-small-e2e-qg on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 4.0205
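
Since the card does not document usage, here is a minimal inference sketch. It assumes the checkpoint can be loaded with `AutoTokenizer`/`AutoModelForSeq2SeqLM`, that it lives under a hypothetical repo id (`your-org/finetuned-baseline-phase-0.0`, swap in the real path), and that it keeps the base model's end-to-end question-generation input format (`"generate questions: <text>"`, with generated questions separated by `<sep>`); verify these assumptions against your checkpoint.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "your-org/finetuned-baseline-phase-0.0"  # hypothetical id; replace with the real checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = (
    "Python is a programming language created by Guido van Rossum "
    "and first released in 1991."
)
# Assumption: the base model's e2e QG prompt prefix is kept after fine-tuning.
inputs = tokenizer("generate questions: " + context, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# The base model separates questions with "<sep>"; splitting is a no-op if it is absent.
questions = [q.strip() for q in decoded.split("<sep>") if q.strip()]
print(questions)
```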
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
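
For reference, these hyperparameters map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is an assumption-laden reconstruction, not the original training script: the output directory and evaluation cadence are placeholders/inferences, a single-device setup is assumed (4 per-device × 16 accumulation steps = 64 effective batch size), and the Adam betas/epsilon equal the Trainer defaults, so they are left implicit.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="finetuned-baseline-phase-0.0",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=16,  # 4 * 16 = 64 effective train batch size (single device assumed)
    lr_scheduler_type="linear",
    num_train_epochs=20,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so they are not set explicitly.
    evaluation_strategy="steps",  # assumption: the results table reports validation every 5 steps
    eval_steps=5,
    logging_steps=5,
)
```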
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2173 | 0.66 | 5 | 4.8296 |
| 4.8604 | 1.32 | 10 | 4.5708 |
| 4.6755 | 1.98 | 15 | 4.4653 |
| 4.6046 | 2.64 | 20 | 4.4006 |
| 4.5457 | 3.31 | 25 | 4.3465 |
| 4.502 | 3.97 | 30 | 4.2920 |
| 4.4677 | 4.63 | 35 | 4.2398 |
| 4.3849 | 5.29 | 40 | 4.2034 |
| 4.3815 | 5.95 | 45 | 4.1794 |
| 4.3412 | 6.61 | 50 | 4.1628 |
| 4.3026 | 7.27 | 55 | 4.1417 |
| 4.3104 | 7.93 | 60 | 4.1198 |
| 4.2791 | 8.6 | 65 | 4.1001 |
| 4.2523 | 9.26 | 70 | 4.0855 |
| 4.235 | 9.92 | 75 | 4.0724 |
| 4.2201 | 10.58 | 80 | 4.0610 |
| 4.1716 | 11.24 | 85 | 4.0534 |
| 4.2005 | 11.9 | 90 | 4.0489 |
| 4.1902 | 12.56 | 95 | 4.0450 |
| 4.1632 | 13.22 | 100 | 4.0399 |
| 4.1467 | 13.88 | 105 | 4.0349 |
| 4.1347 | 14.55 | 110 | 4.0310 |
| 4.1606 | 15.21 | 115 | 4.0277 |
| 4.1425 | 15.87 | 120 | 4.0255 |
| 4.1289 | 16.53 | 125 | 4.0235 |
| 4.126 | 17.19 | 130 | 4.0218 |
| 4.1551 | 17.85 | 135 | 4.0209 |
| 4.1567 | 18.51 | 140 | 4.0205 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
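
As an optional convenience (not part of the original card), the snippet below compares the installed library versions against the ones reported above; exact matches should only matter for faithful reproduction of the training run, not for inference.

```python
import datasets, tokenizers, torch, transformers

reported = {
    "transformers": "4.34.1",
    "torch": "2.1.0+cu118",
    "datasets": "2.14.6",
    "tokenizers": "0.14.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in reported.items():
    print(f"{name}: card reports {version}, installed {installed[name]}")
```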