fine-tuned-distilbart-xsum-12-3-news-summary
This model is a fine-tuned version of sshleifer/distilbart-xsum-12-3 on the News-summary-Kaggle dataset. It achieves the following results on the evaluation set:
- Loss: 3.1426
- Rouge1: 51.2701
- Rouge2: 28.3575
- Rougel: 37.9263
- Rougelsum: 45.8934
- Gen Len: 75.777
Model description
This model uses the pre-trained sshleifer/distilbart-xsum-12-3 checkpoint, fine-tuned on the News-summary-Kaggle dataset. Our aim is to build a model that can summarize news articles efficiently.
Intended uses & limitations
More information needed
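As a starting point, here is a minimal inference sketch using the transformers summarization pipeline. The repo ID is the model named in this card; the generation lengths are illustrative placeholder choices, not values from the card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline(
    "summarization",
    model="LA1512/fine-tuned-distilbart-xsum-12-3-news-summary",
)

article = "..."  # replace with a full news article
# max_length/min_length are illustrative choices, not values from the card.
result = summarizer(article, max_length=128, min_length=30, truncation=True)
print(result[0]["summary_text"])
```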
Training and evaluation data
News Summary (Kaggle): https://www.kaggle.com/datasets/sunnysai12345/news-summary
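A minimal loading sketch for the Kaggle CSV after downloading it locally. The column names are assumptions based on the dataset page ('ctext' holds the full article, 'text' the reference summary) and should be verified against your copy of the file.

```python
import pandas as pd

# Load the Kaggle CSV; this file is commonly distributed in Latin-1 encoding.
# Column names are assumptions from the dataset page: 'ctext' is the full
# article, 'text' the reference summary; verify against your copy.
df = pd.read_csv("news_summary.csv", encoding="latin-1")
df = df[["ctext", "text"]].dropna()
print(len(df), "usable article/summary pairs")
```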
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- label_smoothing_factor: 0.1
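A sketch of how these settings map onto Seq2SeqTrainingArguments from transformers. This is reconstructed from the list above for illustration, not the author's original training script; output_dir is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is a placeholder.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults, so they
# need no explicit arguments here.
training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-distilbart-xsum-12-3-news-summary",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    label_smoothing_factor=0.1,
    predict_with_generate=True,  # required to compute ROUGE during evaluation
)
```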
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 3.4812 | 1.0 | 425 | 3.3209 | 47.7226 | 26.3282 | 35.5063 | 42.5426 | 66.523 |
| 3.2269 | 2.0 | 850 | 3.1838 | 50.4271 | 27.7047 | 37.2638 | 45.1897 | 77.115 |
| 2.9504 | 3.0 | 1275 | 3.1401 | 50.6362 | 28.2773 | 37.6 | 45.4901 | 74.992 |
| 2.8014 | 4.0 | 1700 | 3.1346 | 51.2942 | 28.4684 | 38.0877 | 46.0386 | 74.299 |
| 2.71 | 5.0 | 2125 | 3.1426 | 51.2701 | 28.3575 | 37.9263 | 45.8934 | 75.777 |
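The ROUGE columns above can be reproduced with the evaluate library. A minimal sketch, assuming predictions and references are hypothetical lists of decoded summary strings:

```python
import evaluate

# Compute ROUGE scores in the same style as the table above.
rouge = evaluate.load("rouge")
predictions = ["a model-generated summary"]  # hypothetical decoded outputs
references = ["the reference summary"]       # hypothetical gold summaries
scores = rouge.compute(predictions=predictions, references=references)
# evaluate returns fractions; the Trainer logs them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```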
Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2