# chunwoolee0/distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of distilroberta-base on the wikitext dataset (wikitext-2-raw-v1 configuration). It achieves the following results:
- Train Loss: 2.1557
- Validation Loss: 1.8964
- Epoch: 0
## Model description
This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT.
## Intended uses & limitations

This model was built as an exercise in fine-tuning a language model for the fill-mask task; see the usage sketch below.
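As an illustration, here is a minimal fill-mask sketch using the transformers pipeline API. The model ID comes from this card's title; the example sentence is an arbitrary assumption, not taken from the training data.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for masked-token prediction.
unmasker = pipeline(
    "fill-mask",
    model="chunwoolee0/distilroberta-base-finetuned-wikitext2",
)

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in unmasker("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```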
## Training and evaluation data

The wikitext dataset (wikitext-2-raw-v1 configuration) is used for both training and evaluation.
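A minimal sketch of loading this dataset with the datasets library (wikitext-2-raw-v1 ships with standard train/validation/test splits):

```python
from datasets import load_dataset

# Loads all three splits: train, validation, and test.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
print(dataset["train"][0]["text"])
```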
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
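The optimizer settings above match transformers' TensorFlow AdamWeightDecay class. A minimal sketch of recreating it from the listed values follows; this is an assumption about how the optimizer was constructed, not the exact training script:

```python
from transformers import AdamWeightDecay

# Mirrors the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```

In a Keras workflow this optimizer would typically be passed to model.compile() before calling model.fit().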
### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1557     | 1.8964          | 0     |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3