---
license: apache-2.0
tags:
- generated_from_trainer
- email generation
- email
datasets:
- aeslc
- postbot/multi_emails_kw
widget:
- text: Thursday pay invoice need asap thanks Pierre good morning dear Harold
example_title: invoice
- text: dear elia when will space be ready need urgently regards ronald
example_title: space ready
- text: >-
Tuesday I need review document before leaves our company need know when
leave
example_title: review document
- text: dear bob will back wednesday need urgently regards elena
example_title: return wednesday
- text: dear mary thanks for your last invoice need know when payment be
example_title: last invoice
- text: dear william I out yesterday received message today will get back today
example_title: message
- text: >-
dear joseph have all invoices ready Monday next invoice in 30 days have
great weekend
example_title: next invoice
- text: >-
dear mary I have couple questions on new contract we agreed on need know
thoughts regarding contract
example_title: contract
- text: Friday will make report due soon please thanks dear john
example_title: report due soon
- text: >-
need take photos sunday want finish thursday photo exhibition need urgent
help thanks dear john
example_title: photo exhibition
- text: Tuesday need talk with you important stuff
example_title: important talk
- text: dear maria how are you doing thanks very much
example_title: thanks
- text: >-
dear james tomorrow will prepare file for june report before leave need
know when leave
example_title: file for june report
parameters:
  min_length: 16
  max_length: 256
  no_repeat_ngram_size: 2
  do_sample: false
  num_beams: 8
  early_stopping: true
  repetition_penalty: 2.5
  length_penalty: 0.9
---
# t5-small-kw2email-v2
This model is a fine-tuned version of [postbot/t5-small-kw2email](https://huggingface.co/postbot/t5-small-kw2email) that drafts short emails from keyword-style prompts (see the widget examples above).
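As a minimal inference sketch, assuming the checkpoint is published on the Hub under the id `postbot/t5-small-kw2email-v2`, the widget prompts and the `parameters` block in the metadata above translate to 🤗 Transformers like this:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (assumed Hub id: postbot/t5-small-kw2email-v2)
generator = pipeline("text2text-generation", model="postbot/t5-small-kw2email-v2")

# Keyword-style prompt, taken from the widget examples above
prompt = "Thursday pay invoice need asap thanks Pierre good morning dear Harold"

# Generation settings copied from the `parameters` block in the metadata
result = generator(
    prompt,
    min_length=16,
    max_length=256,
    no_repeat_ngram_size=2,
    do_sample=False,
    num_beams=8,
    early_stopping=True,
    repetition_penalty=2.5,
    length_penalty=0.9,
)
print(result[0]["generated_text"])
```

Note that the widget settings use deterministic beam search with a strong repetition penalty rather than sampling, which suits short, formulaic email drafts.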
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 4
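For illustration only, a sketch of how these values would map onto `Seq2SeqTrainingArguments`; the original training script is not part of this card, and `output_dir` is a hypothetical placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the arguments implied by the list above; not the original script.
args = Seq2SeqTrainingArguments(
    output_dir="./t5-small-kw2email-v2",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # 16 x 4 = 64 effective, matching total_train_batch_size
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=4,
    # Matches the optimizer line above (these are also the Adam defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```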
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1