
gpt2-medium-nlg-tm1_tm2_tm3

This model is a fine-tuned version of GPT2-medium on TaskMaster1, TaskMaster2, and TaskMaster3.

Refer to ConvLab-3 for the model description and usage instructions.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-5
  • train_batch_size: 64
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 128
  • optimizer: AdamW
  • lr_scheduler_type: linear
  • num_epochs: 20
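The total train batch size listed above is simply the per-step batch size multiplied by the gradient accumulation steps; a quick check:

```python
# Effective (total) train batch size = per-step batch size x accumulation steps
train_batch_size = 64
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128
```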

Framework versions

  • Transformers 4.23.1
  • Pytorch 1.10.1+cu111
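To reproduce the training environment, the versions above can be pinned at install time. This is an environment-setup sketch, not a command from the model card: the CUDA 11.1 wheel index URL assumes the standard PyTorch wheel archive and a machine with a compatible CUDA setup.

```shell
# Pin the framework versions listed above (assumes CUDA 11.1 is available)
pip install transformers==4.23.1
pip install torch==1.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
```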