ASAP_FineTuningBERT_AugV6_k1_task1_organization_k1_k1_fold3

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset (the training data is not recorded in this card). It achieves the following results on the evaluation set:

  • Loss: 1.1400
  • QWK: 0.3435
  • MSE: 1.1407
  • RMSE: 1.0680
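For reference, the three evaluation metrics reported above can be computed as follows. This is a minimal pure-Python sketch, not the card's actual evaluation code; the rating range passed to the QWK function is whatever ordinal label range the task uses and is an assumption here.

```python
import math

def quadratic_weighted_kappa(y_true, y_pred, min_rating, max_rating):
    """Quadratic weighted kappa (QWK): agreement between predicted and true
    ordinal labels, penalizing disagreements by squared rating distance."""
    n = max_rating - min_rating + 1
    # Observed confusion matrix over the rating scale
    observed = [[0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        observed[t - min_rating][p - min_rating] += 1
    total = len(y_true)
    # Marginal histograms of true and predicted ratings
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            weight = (i - j) ** 2 / (n - 1) ** 2
            expected = hist_true[i] * hist_pred[j] / total
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

def mse_rmse(y_true, y_pred):
    """Mean squared error and its square root."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return mse, math.sqrt(mse)
```

QWK is 1.0 for perfect agreement, 0.0 for chance-level agreement, and can go negative for systematic disagreement (as in the first epochs of the table below).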

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
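The hyperparameters above map directly onto the Transformers Trainer API. The following is a sketch of a matching TrainingArguments, not the author's actual training script; output_dir and eval_strategy are assumptions not recorded in the card.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments mirroring the card's hyperparameters.
# output_dir and eval_strategy="epoch" are assumptions; everything else
# is taken verbatim from the list above.
args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_AugV6_k1_task1_organization_k1_k1_fold3",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",
)
```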

Training results

| Training Loss | Epoch | Step | Validation Loss | QWK     | MSE     | RMSE   |
|---------------|-------|------|-----------------|---------|---------|--------|
| No log        | 1.0   | 2    | 13.0565         | -0.0008 | 13.0545 | 3.6131 |
| No log        | 2.0   | 4    | 8.8406          | 0.0     | 8.8389  | 2.9730 |
| No log        | 3.0   | 6    | 6.7196          | 0.0     | 6.7180  | 2.5919 |
| No log        | 4.0   | 8    | 6.5079          | 0.0     | 6.5063  | 2.5508 |
| No log        | 5.0   | 10   | 6.3578          | 0.0     | 6.3563  | 2.5212 |
| No log        | 6.0   | 12   | 6.1700          | -0.0025 | 6.1685  | 2.4837 |
| No log        | 7.0   | 14   | 5.8542          | 0.0     | 5.8528  | 2.4193 |
| No log        | 8.0   | 16   | 5.3036          | 0.0     | 5.3023  | 2.3027 |
| No log        | 9.0   | 18   | 4.6777          | 0.0     | 4.6764  | 2.1625 |
| No log        | 10.0  | 20   | 4.0707          | 0.0     | 4.0695  | 2.0173 |
| No log        | 11.0  | 22   | 3.6237          | 0.0     | 3.6226  | 1.9033 |
| No log        | 12.0  | 24   | 3.1770          | 0.0     | 3.1760  | 1.7821 |
| No log        | 13.0  | 26   | 2.5483          | 0.0502  | 2.5475  | 1.5961 |
| No log        | 14.0  | 28   | 2.0388          | 0.0467  | 2.0379  | 1.4276 |
| No log        | 15.0  | 30   | 1.8301          | 0.0834  | 1.8294  | 1.3525 |
| No log        | 16.0  | 32   | 1.6908          | 0.0935  | 1.6902  | 1.3001 |
| No log        | 17.0  | 34   | 1.1983          | 0.0355  | 1.1977  | 1.0944 |
| No log        | 18.0  | 36   | 0.9742          | 0.0417  | 0.9737  | 0.9868 |
| No log        | 19.0  | 38   | 0.9388          | 0.1501  | 0.9386  | 0.9688 |
| No log        | 20.0  | 40   | 1.6981          | 0.1820  | 1.6976  | 1.3029 |
| No log        | 21.0  | 42   | 1.4947          | 0.1864  | 1.4944  | 1.2224 |
| No log        | 22.0  | 44   | 1.3656          | 0.1891  | 1.3655  | 1.1685 |
| No log        | 23.0  | 46   | 2.1279          | 0.1492  | 2.1275  | 1.4586 |
| No log        | 24.0  | 48   | 1.4069          | 0.2392  | 1.4070  | 1.1862 |
| No log        | 25.0  | 50   | 0.7704          | 0.4551  | 0.7707  | 0.8779 |
| No log        | 26.0  | 52   | 0.6943          | 0.5013  | 0.6944  | 0.8333 |
| No log        | 27.0  | 54   | 0.7822          | 0.4244  | 0.7824  | 0.8845 |
| No log        | 28.0  | 56   | 1.5110          | 0.2183  | 1.5110  | 1.2292 |
| No log        | 29.0  | 58   | 2.7385          | 0.0593  | 2.7380  | 1.6547 |
| No log        | 30.0  | 60   | 2.2825          | 0.1170  | 2.2823  | 1.5107 |
| No log        | 31.0  | 62   | 1.0774          | 0.3487  | 1.0780  | 1.0383 |
| No log        | 32.0  | 64   | 0.7726          | 0.4629  | 0.7732  | 0.8793 |
| No log        | 33.0  | 66   | 1.0782          | 0.3837  | 1.0790  | 1.0388 |
| No log        | 34.0  | 68   | 2.6649          | 0.0752  | 2.6650  | 1.6325 |
| No log        | 35.0  | 70   | 3.2031          | 0.0376  | 3.2030  | 1.7897 |
| No log        | 36.0  | 72   | 2.6177          | 0.0731  | 2.6180  | 1.6180 |
| No log        | 37.0  | 74   | 1.4181          | 0.2922  | 1.4191  | 1.1913 |
| No log        | 38.0  | 76   | 1.2087          | 0.3130  | 1.2098  | 1.0999 |
| No log        | 39.0  | 78   | 1.8184          | 0.2183  | 1.8191  | 1.3487 |
| No log        | 40.0  | 80   | 2.7934          | 0.0817  | 2.7931  | 1.6713 |
| No log        | 41.0  | 82   | 2.8437          | 0.0819  | 2.8434  | 1.6862 |
| No log        | 42.0  | 84   | 2.2865          | 0.1484  | 2.2871  | 1.5123 |
| No log        | 43.0  | 86   | 1.7042          | 0.2239  | 1.7049  | 1.3057 |
| No log        | 44.0  | 88   | 1.5346          | 0.2505  | 1.5355  | 1.2391 |
| No log        | 45.0  | 90   | 1.5897          | 0.2319  | 1.5913  | 1.2615 |
| No log        | 46.0  | 92   | 1.1400          | 0.3435  | 1.1407  | 1.0680 |
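The table logs 2 optimizer steps per epoch and stops at epoch 46 of the 100 scheduled, which suggests training ended early (the card does not say why). Under the linear scheduler listed in the hyperparameters, the learning rate at any step can be sketched in pure Python as follows; warmup_steps is an illustrative parameter (no warmup is recorded in the card).

```python
def linear_lr(step, base_lr=2e-5, warmup_steps=0, total_steps=200):
    """Linear schedule: optional warmup, then linear decay to zero.
    total_steps = num_epochs * steps_per_epoch = 100 * 2 for this run."""
    if step < warmup_steps:
        # Ramp up from 0 to base_lr during warmup
        return base_lr * step / max(1, warmup_steps)
    # Decay linearly from base_lr to 0 over the remaining steps
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# LR at the last logged step (epoch 46, step 92)
lr_at_stop = linear_lr(92)
```

Because the schedule is sized for the full 100-epoch budget, the run stopped while the learning rate was still above half its initial value.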

Framework versions

  • Transformers 4.47.0
  • PyTorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0

Model tree for genki10/ASAP_FineTuningBERT_AugV6_k1_task1_organization_k1_k1_fold3