# bert-31
This model is a fine-tuned version of hung200504/bert-21 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 9.6564
## Model description
More information needed
## Intended uses & limitations
More information needed
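The task this checkpoint was fine-tuned for is not documented. As a minimal sketch, assuming it is hosted at `hung200504/bert-31` (an assumption; adjust to the actual Hub id) and exposes a standard BERT encoder, it can be loaded and used to produce token embeddings:

```python
# Minimal loading sketch. The repository id and the use of a plain encoder head are
# assumptions; the task head of this checkpoint is not documented in this card.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "hung200504/bert-31"  # assumed Hub id for this checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example sentence to encode.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per input token: (batch, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```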
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
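As a rough reproduction sketch, the values above map onto `transformers.TrainingArguments` (Transformers 4.34.x) as shown below. The training data and task head are not documented in this card, so the final `Trainer` call is left as a commented placeholder:

```python
# Reproduction sketch: the hyperparameters listed above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-31",
    learning_rate=3e-06,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="steps",  # the results table below logs validation loss every 5 steps
    eval_steps=5,
    logging_steps=5,
)

# The model head and dataset are not documented here; once supplied, training would run as:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```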
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.8389       | 0.09  | 5    | 11.6872         |
| 10.6506       | 0.18  | 10   | 11.3129         |
| 9.9367        | 0.27  | 15   | 10.9748         |
| 9.7504        | 0.36  | 20   | 10.6767         |
| 9.4789        | 0.45  | 25   | 10.4164         |
| 9.3145        | 0.55  | 30   | 10.1941         |
| 8.4653        | 0.64  | 35   | 10.0120         |
| 8.9579        | 0.73  | 40   | 9.8679          |
| 8.3746        | 0.82  | 45   | 9.7613          |
| 8.3205        | 0.91  | 50   | 9.6904          |
| 8.0754        | 1.0   | 55   | 9.6564          |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1