# bert-covid-10

This model is a fine-tuned version of hung200504/bert-squadv2 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7771
## Model description

More information needed
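Since the base checkpoint is a SQuAD v2-style extractive QA model, this fine-tune can presumably be used with the standard `question-answering` pipeline from Transformers. A minimal usage sketch (the question and context strings are illustrative, not from the training data):

```python
MODEL_ID = "hung200504/bert-covid-10"

def answer(question: str, context: str) -> dict:
    """Run extractive QA with the fine-tuned checkpoint.

    Assumes the standard transformers question-answering pipeline;
    the model is downloaded from the Hugging Face Hub on first use.
    """
    from transformers import pipeline  # lazy import so the sketch stays self-contained

    qa = pipeline("question-answering", model=MODEL_ID)
    return qa(question=question, context=context)

if __name__ == "__main__":
    # Returns a dict with "answer", "score", "start", and "end" keys.
    print(answer("What causes COVID-19?",
                 "COVID-19 is caused by the SARS-CoV-2 virus."))
```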
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
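With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate presumably decays linearly from 3e-05 to zero over the run. A small sketch of that schedule (the total step count of roughly 534, inferred from the results table, is an assumption):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 3e-5) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at
    total_steps (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Full rate at step 0, half rate midway, zero at the end.
print(linear_lr(0, 534))    # 3e-05
print(linear_lr(267, 534))  # 1.5e-05
print(linear_lr(534, 534))  # 0.0
```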
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
4.5591 | 0.03 | 5 | 1.2013 |
1.26 | 0.06 | 10 | 1.4395 |
1.0119 | 0.08 | 15 | 1.0122 |
1.2415 | 0.11 | 20 | 0.9416 |
1.2023 | 0.14 | 25 | 0.9744 |
0.6522 | 0.17 | 30 | 1.0264 |
0.7933 | 0.2 | 35 | 0.9108 |
0.8563 | 0.22 | 40 | 0.8834 |
1.7054 | 0.25 | 45 | 0.7498 |
1.2612 | 0.28 | 50 | 0.9725 |
1.5539 | 0.31 | 55 | 0.7606 |
0.9714 | 0.34 | 60 | 0.7498 |
0.9315 | 0.37 | 65 | 0.8180 |
0.785 | 0.39 | 70 | 0.7489 |
0.8412 | 0.42 | 75 | 0.7245 |
0.771 | 0.45 | 80 | 0.7001 |
0.9954 | 0.48 | 85 | 0.7978 |
0.8955 | 0.51 | 90 | 0.7512 |
0.5199 | 0.53 | 95 | 0.7987 |
0.8306 | 0.56 | 100 | 0.7427 |
1.674 | 0.59 | 105 | 0.7486 |
1.07 | 0.62 | 110 | 0.7545 |
1.1531 | 0.65 | 115 | 0.7376 |
0.4512 | 0.67 | 120 | 0.7090 |
1.2562 | 0.7 | 125 | 0.8047 |
0.3477 | 0.73 | 130 | 0.8520 |
1.2624 | 0.76 | 135 | 0.8251 |
0.9207 | 0.79 | 140 | 0.9866 |
0.8576 | 0.81 | 145 | 0.8059 |
0.9542 | 0.84 | 150 | 0.7819 |
0.566 | 0.87 | 155 | 0.7930 |
0.5193 | 0.9 | 160 | 0.7936 |
1.1654 | 0.93 | 165 | 0.7043 |
0.6106 | 0.96 | 170 | 0.7801 |
1.0075 | 0.98 | 175 | 0.8119 |
0.3914 | 1.01 | 180 | 0.6728 |
0.582 | 1.04 | 185 | 0.7447 |
0.5797 | 1.07 | 190 | 0.7109 |
0.2599 | 1.1 | 195 | 0.7113 |
0.6996 | 1.12 | 200 | 0.7092 |
0.6947 | 1.15 | 205 | 0.6919 |
0.9602 | 1.18 | 210 | 0.6917 |
0.3383 | 1.21 | 215 | 0.7037 |
0.2811 | 1.24 | 220 | 0.6921 |
0.5086 | 1.26 | 225 | 0.7445 |
0.6181 | 1.29 | 230 | 0.7626 |
0.5704 | 1.32 | 235 | 0.7376 |
0.4165 | 1.35 | 240 | 0.7283 |
0.6875 | 1.38 | 245 | 0.7215 |
0.3372 | 1.4 | 250 | 0.7111 |
0.8581 | 1.43 | 255 | 0.7325 |
0.2464 | 1.46 | 260 | 0.7388 |
0.4273 | 1.49 | 265 | 0.7421 |
0.5893 | 1.52 | 270 | 0.7215 |
0.3417 | 1.54 | 275 | 0.7113 |
0.3248 | 1.57 | 280 | 0.7255 |
0.3868 | 1.6 | 285 | 0.7591 |
0.6292 | 1.63 | 290 | 0.7761 |
0.8929 | 1.66 | 295 | 0.7377 |
0.5528 | 1.69 | 300 | 0.7600 |
0.7983 | 1.71 | 305 | 0.7501 |
0.5109 | 1.74 | 310 | 0.7427 |
0.2947 | 1.77 | 315 | 0.7341 |
0.735 | 1.8 | 320 | 0.7268 |
0.4768 | 1.83 | 325 | 0.7358 |
0.5174 | 1.85 | 330 | 0.7587 |
0.7559 | 1.88 | 335 | 0.7637 |
0.7588 | 1.91 | 340 | 0.8034 |
0.6151 | 1.94 | 345 | 0.7513 |
0.6112 | 1.97 | 350 | 0.7014 |
0.9156 | 1.99 | 355 | 0.6862 |
0.6369 | 2.02 | 360 | 0.6850 |
0.5036 | 2.05 | 365 | 0.7085 |
0.2256 | 2.08 | 370 | 0.7550 |
0.2673 | 2.11 | 375 | 0.7604 |
0.3033 | 2.13 | 380 | 0.7795 |
0.496 | 2.16 | 385 | 0.7891 |
0.3478 | 2.19 | 390 | 0.7892 |
0.5106 | 2.22 | 395 | 0.7879 |
0.1652 | 2.25 | 400 | 0.7844 |
0.3427 | 2.28 | 405 | 0.7969 |
0.4543 | 2.3 | 410 | 0.8061 |
0.3494 | 2.33 | 415 | 0.8045 |
0.4218 | 2.36 | 420 | 0.7992 |
0.7607 | 2.39 | 425 | 0.7786 |
0.5569 | 2.42 | 430 | 0.7579 |
0.1897 | 2.44 | 435 | 0.7475 |
0.292 | 2.47 | 440 | 0.7457 |
0.3637 | 2.5 | 445 | 0.7530 |
0.2565 | 2.53 | 450 | 0.7574 |
0.2058 | 2.56 | 455 | 0.7601 |
0.2844 | 2.58 | 460 | 0.7562 |
0.7811 | 2.61 | 465 | 0.7556 |
0.4162 | 2.64 | 470 | 0.7603 |
0.4668 | 2.67 | 475 | 0.7696 |
0.2115 | 2.7 | 480 | 0.7681 |
0.3403 | 2.72 | 485 | 0.7623 |
0.0648 | 2.75 | 490 | 0.7618 |
0.789 | 2.78 | 495 | 0.7654 |
0.3259 | 2.81 | 500 | 0.7690 |
0.4558 | 2.84 | 505 | 0.7713 |
0.4416 | 2.87 | 510 | 0.7708 |
0.0154 | 2.89 | 515 | 0.7714 |
0.0503 | 2.92 | 520 | 0.7730 |
0.3909 | 2.95 | 525 | 0.7750 |
0.1983 | 2.98 | 530 | 0.7771 |
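Note that the reported final loss (0.7771, step 530) is not the best point in the run: validation loss bottomed out at 0.6728 around step 180. A quick scan over (step, validation loss) pairs, using a few rows copied from the table, picks out that minimum:

```python
# (step, validation_loss) pairs copied from selected rows of the table above.
history = [
    (5, 1.2013),
    (80, 0.7001),
    (180, 0.6728),
    (360, 0.6850),
    (530, 0.7771),
]

# The best checkpoint is the one with the lowest validation loss.
best_step, best_loss = min(history, key=lambda pair: pair[1])
print(best_step, best_loss)  # 180 0.6728
```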
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1