---
library_name: transformers
license: bsd-3-clause
base_model: pszemraj/led-base-book-summary
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: LED-cnn-dataset-summarization
  results: []
---
# LED-cnn-dataset-summarization

This model is a fine-tuned version of [pszemraj/led-base-book-summary](https://huggingface.co/pszemraj/led-base-book-summary) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.0098
- Rouge1: 0.4061
- Rouge2: 0.1676
- Rougel: 0.2695
- Rougelsum: 0.3756
- Gen Len: 79.036
## Model description
More information needed
## Intended uses & limitations
More information needed
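While the card has not been filled out, this is an LED-based summarization checkpoint, so a minimal inference sketch might look like the following. The Hub repo path is a placeholder for wherever this checkpoint is published, and the generation settings are illustrative assumptions, not the settings used for the evaluation above:

```python
# Hypothetical usage sketch: the repo id below is a placeholder for wherever
# this checkpoint is published on the Hub; swap in the real path.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-username/LED-cnn-dataset-summarization",  # placeholder repo id
)

article = """(paste a long news article or document here)"""
result = summarizer(
    article,
    max_length=128,          # illustrative; mean eval summary length was ~79 tokens
    min_length=32,           # illustrative
    no_repeat_ngram_size=3,  # illustrative decoding constraint
    truncation=True,         # guard against inputs beyond the model's max length
)
print(result[0]["summary_text"])
```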
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
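
For reference, these settings could be expressed as `Seq2SeqTrainingArguments` roughly as sketched below. The `output_dir`, `eval_strategy`, and `predict_with_generate` values are assumptions not recorded in the card; the listed Adam betas and epsilon match the `transformers` defaults, so they are not set explicitly:

```python
# A sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="LED-cnn-dataset-summarization",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,                   # "Native AMP" mixed precision
    predict_with_generate=True,  # assumed, so ROUGE can be computed at eval time
    eval_strategy="epoch",       # assumed; the table below reports metrics per epoch
)
```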
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 250  | 1.8883          | 0.4074 | 0.1733 | 0.2733 | 0.3741    | 81.696  |
| 1.9196        | 2.0   | 500  | 1.8782          | 0.4105 | 0.1738 | 0.2735 | 0.3789    | 85.312  |
| 1.9196        | 3.0   | 750  | 1.8763          | 0.4080 | 0.1734 | 0.2747 | 0.3754    | 84.348  |
| 1.4188        | 4.0   | 1000 | 1.9043          | 0.4086 | 0.1716 | 0.2730 | 0.3795    | 79.842  |
| 1.4188        | 5.0   | 1250 | 1.9344          | 0.4084 | 0.1686 | 0.2713 | 0.3770    | 79.926  |
| 1.1680        | 6.0   | 1500 | 1.9623          | 0.4121 | 0.1733 | 0.2749 | 0.3813    | 77.228  |
| 1.1680        | 7.0   | 1750 | 2.0004          | 0.4092 | 0.1711 | 0.2730 | 0.3794    | 77.102  |
| 1.0279        | 8.0   | 2000 | 2.0098          | 0.4061 | 0.1676 | 0.2695 | 0.3756    | 79.036  |
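
The scores above are the standard rouge1/rouge2/rougeL/rougeLsum variants. A hedged sketch of recomputing them with the `evaluate` library (the predictions and references below are placeholders, not the actual evaluation set):

```python
# Sketch of recomputing ROUGE with the `evaluate` library; the inputs are
# placeholders standing in for model outputs and gold summaries.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a generated summary ..."],   # placeholder model outputs
    references=["the reference summary ..."],  # placeholder gold summaries
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum F-measures
```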
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1