# ASAP_FineTuningBERT_AugV8_k1_task1_organization_k1_fold3
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset (a minimal usage sketch follows the metric list). It achieves the following results on the evaluation set:
- Loss: 0.7391
- Qwk: 0.5287
- Mse: 0.7399
- Rmse: 0.8602
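
The metric names (QWK, MSE, RMSE) indicate the model predicts a numeric essay score, i.e. it is used as a regressor. Below is a minimal loading sketch; the assumption that the checkpoint carries a single-output sequence-classification (regression) head is not stated in the card and should be checked against the model config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repository id as listed on the model page.
MODEL_ID = "genki10/ASAP_FineTuningBERT_AugV8_k1_task1_organization_k1_fold3"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

essay = "The essay is organized into clear paragraphs with a logical flow."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    # Assuming num_labels=1 (regression head), the raw logit is the predicted score.
    predicted_score = model(**inputs).logits.squeeze(-1).item()

print(predicted_score)
```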
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
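
As a rough guide, these settings correspond to a `transformers.TrainingArguments` configuration along the following lines; `output_dir` is a placeholder, and per-epoch evaluation is inferred from the results table rather than stated explicitly.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_AugV8_k1_task1_organization_k1_fold3",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",  # inferred: validation metrics are reported once per epoch
)
```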
### Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
---|---|---|---|---|---|---|
No log | 1.0 | 2 | 10.3196 | 0.0014 | 10.3178 | 3.2121 |
No log | 2.0 | 4 | 7.2215 | 0.0 | 7.2202 | 2.6870 |
No log | 3.0 | 6 | 5.3307 | 0.0161 | 5.3295 | 2.3086 |
No log | 4.0 | 8 | 4.0710 | 0.0 | 4.0699 | 2.0174 |
No log | 5.0 | 10 | 3.3146 | 0.0 | 3.3134 | 1.8203 |
No log | 6.0 | 12 | 2.5785 | -0.0111 | 2.5776 | 1.6055 |
No log | 7.0 | 14 | 2.0507 | 0.0266 | 2.0499 | 1.4318 |
No log | 8.0 | 16 | 1.6022 | 0.0202 | 1.6015 | 1.2655 |
No log | 9.0 | 18 | 1.2505 | 0.0266 | 1.2500 | 1.1180 |
No log | 10.0 | 20 | 1.3209 | 0.0202 | 1.3204 | 1.1491 |
No log | 11.0 | 22 | 1.0664 | 0.0717 | 1.0661 | 1.0325 |
No log | 12.0 | 24 | 1.0138 | 0.3620 | 1.0139 | 1.0069 |
No log | 13.0 | 26 | 0.8288 | 0.5269 | 0.8291 | 0.9105 |
No log | 14.0 | 28 | 0.8755 | 0.2203 | 0.8753 | 0.9356 |
No log | 15.0 | 30 | 1.0460 | 0.2004 | 1.0458 | 1.0227 |
No log | 16.0 | 32 | 0.9638 | 0.3144 | 0.9639 | 0.9818 |
No log | 17.0 | 34 | 0.8350 | 0.5351 | 0.8354 | 0.9140 |
No log | 18.0 | 36 | 0.8384 | 0.5377 | 0.8388 | 0.9159 |
No log | 19.0 | 38 | 0.8951 | 0.4324 | 0.8954 | 0.9462 |
No log | 20.0 | 40 | 0.8969 | 0.4082 | 0.8973 | 0.9472 |
No log | 21.0 | 42 | 0.8200 | 0.5473 | 0.8205 | 0.9058 |
No log | 22.0 | 44 | 0.7806 | 0.5406 | 0.7811 | 0.8838 |
No log | 23.0 | 46 | 0.7539 | 0.4906 | 0.7545 | 0.8686 |
No log | 24.0 | 48 | 0.7043 | 0.5153 | 0.7048 | 0.8395 |
No log | 25.0 | 50 | 0.6591 | 0.5214 | 0.6596 | 0.8122 |
No log | 26.0 | 52 | 0.6587 | 0.5178 | 0.6593 | 0.8120 |
No log | 27.0 | 54 | 0.6943 | 0.5135 | 0.6951 | 0.8337 |
No log | 28.0 | 56 | 0.6894 | 0.5219 | 0.6902 | 0.8308 |
No log | 29.0 | 58 | 0.7109 | 0.5223 | 0.7118 | 0.8437 |
No log | 30.0 | 60 | 0.8405 | 0.4449 | 0.8412 | 0.9172 |
No log | 31.0 | 62 | 0.7391 | 0.5287 | 0.7399 | 0.8602 |
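
For reference, the Qwk, Mse, and Rmse columns can be reproduced from gold and predicted scores roughly as follows; rounding continuous predictions to integer bins before computing the quadratic weighted kappa is an assumption, not something the card specifies.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def eval_metrics(y_true, y_pred):
    """Compute QWK, MSE, and RMSE for essay-score predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = mean_squared_error(y_true, y_pred)  # on raw continuous predictions
    rmse = float(np.sqrt(mse))
    qwk = cohen_kappa_score(                  # kappa needs discrete labels
        np.rint(y_true).astype(int),
        np.rint(y_pred).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse, "rmse": rmse}
```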
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0