# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3872
- Accuracy: 0.6802
- F1: 0.6793
## Model description
More information needed
## Intended uses & limitations
More information needed
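Since no usage documentation is provided, the checkpoint can at least be exercised like any Transformers text-classification model. A minimal inference sketch, assuming the Hub repo id `dalopeza98/distilbert-base-uncased-finetuned-emotion`; note that the meaning of the predicted labels depends on the undocumented fine-tuning dataset:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub. The label names come from
# the checkpoint's config; since the fine-tuning dataset is undocumented,
# verify what each label means before relying on the predictions.
classifier = pipeline(
    "text-classification",
    model="dalopeza98/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe how well this turned out!"))
# -> [{'label': ..., 'score': ...}]  (label set not documented in this card)
```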
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
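A minimal `Trainer` setup that mirrors these hyperparameters is sketched below. This is an assumption-laden reconstruction, not the author's original script: the dataset, tokenization, label count, and F1 averaging are all undocumented, so the dummy data, `num_labels=6`, and `average="weighted"` are placeholders and assumptions.

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=6,  # assumption: the actual label count is undocumented
)

# Dummy stand-in data: the real training/evaluation datasets are unknown.
raw = Dataset.from_dict({"text": ["placeholder"] * 8, "label": [0] * 8})
encoded = raw.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)
train_ds = eval_ds = encoded

def compute_metrics(eval_pred):
    # Accuracy and F1 as reported above; weighted averaging is an assumption,
    # since the card does not say how F1 was aggregated.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
```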
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
---|---|---|---|---|---|
No log | 1.0 | 48 | 1.7238 | 0.6762 | 0.6813 |
0.0305 | 2.0 | 96 | 1.8028 | 0.6775 | 0.6755 |
0.0305 | 3.0 | 144 | 1.9018 | 0.6689 | 0.6668 |
0.0257 | 4.0 | 192 | 1.9426 | 0.6735 | 0.6740 |
0.0257 | 5.0 | 240 | 1.9829 | 0.6662 | 0.6670 |
0.0207 | 6.0 | 288 | 1.9462 | 0.6722 | 0.6753 |
0.0207 | 7.0 | 336 | 1.9573 | 0.6861 | 0.6851 |
0.0185 | 8.0 | 384 | 2.0147 | 0.6808 | 0.6820 |
0.0185 | 9.0 | 432 | 2.0982 | 0.6669 | 0.6649 |
0.0172 | 10.0 | 480 | 2.0431 | 0.6815 | 0.6799 |
0.0172 | 11.0 | 528 | 2.0935 | 0.6768 | 0.6751 |
0.0182 | 12.0 | 576 | 2.0599 | 0.6868 | 0.6835 |
0.0182 | 13.0 | 624 | 2.0953 | 0.6808 | 0.6812 |
0.0148 | 14.0 | 672 | 2.1115 | 0.6788 | 0.6790 |
0.0148 | 15.0 | 720 | 2.1529 | 0.6735 | 0.6765 |
0.0171 | 16.0 | 768 | 2.1873 | 0.6702 | 0.6720 |
0.0171 | 17.0 | 816 | 2.1534 | 0.6782 | 0.6793 |
0.0142 | 18.0 | 864 | 2.1803 | 0.6782 | 0.6773 |
0.0142 | 19.0 | 912 | 2.2252 | 0.6802 | 0.6801 |
0.0168 | 20.0 | 960 | 2.2221 | 0.6749 | 0.6764 |
0.0168 | 21.0 | 1008 | 2.2365 | 0.6821 | 0.6817 |
0.015 | 22.0 | 1056 | 2.2812 | 0.6742 | 0.6728 |
0.015 | 23.0 | 1104 | 2.2447 | 0.6729 | 0.6707 |
0.0145 | 24.0 | 1152 | 2.3272 | 0.6709 | 0.6700 |
0.0145 | 25.0 | 1200 | 2.2630 | 0.6788 | 0.6809 |
0.0151 | 26.0 | 1248 | 2.2751 | 0.6808 | 0.6811 |
0.0151 | 27.0 | 1296 | 2.3018 | 0.6768 | 0.6776 |
0.0144 | 28.0 | 1344 | 2.3544 | 0.6676 | 0.6681 |
0.0144 | 29.0 | 1392 | 2.3109 | 0.6821 | 0.6828 |
0.0126 | 30.0 | 1440 | 2.3234 | 0.6795 | 0.6786 |
0.0126 | 31.0 | 1488 | 2.3294 | 0.6755 | 0.6750 |
0.0142 | 32.0 | 1536 | 2.3183 | 0.6875 | 0.6886 |
0.0142 | 33.0 | 1584 | 2.2949 | 0.6808 | 0.6823 |
0.0131 | 34.0 | 1632 | 2.3451 | 0.6788 | 0.6773 |
0.0131 | 35.0 | 1680 | 2.3160 | 0.6828 | 0.6841 |
0.0143 | 36.0 | 1728 | 2.3251 | 0.6828 | 0.6815 |
0.0143 | 37.0 | 1776 | 2.4003 | 0.6762 | 0.6753 |
0.0116 | 38.0 | 1824 | 2.3675 | 0.6775 | 0.6770 |
0.0116 | 39.0 | 1872 | 2.3700 | 0.6749 | 0.6735 |
0.0126 | 40.0 | 1920 | 2.3700 | 0.6841 | 0.6831 |
0.0126 | 41.0 | 1968 | 2.3818 | 0.6795 | 0.6793 |
0.0115 | 42.0 | 2016 | 2.3518 | 0.6815 | 0.6814 |
0.0115 | 43.0 | 2064 | 2.3829 | 0.6802 | 0.6790 |
0.0135 | 44.0 | 2112 | 2.3638 | 0.6782 | 0.6775 |
0.0135 | 45.0 | 2160 | 2.3568 | 0.6775 | 0.6768 |
0.0146 | 46.0 | 2208 | 2.3633 | 0.6788 | 0.6784 |
0.0118 | 47.0 | 2256 | 2.3725 | 0.6788 | 0.6782 |
0.0118 | 48.0 | 2304 | 2.3875 | 0.6815 | 0.6806 |
0.0116 | 49.0 | 2352 | 2.3862 | 0.6795 | 0.6787 |
0.0116 | 50.0 | 2400 | 2.3872 | 0.6802 | 0.6793 |
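One pattern stands out in the table: validation loss climbs steadily (1.7238 at epoch 1 to 2.3872 at epoch 50) while accuracy and F1 barely move (F1 0.6813 at epoch 1, best 0.6886 at epoch 32), a typical overfitting signature. Anyone retraining could likely stop far earlier at similar quality. A hedged sketch extending the reproduction above with Transformers' built-in early stopping (the patience value is illustrative, not from the original run):

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    eval_strategy="epoch",
    save_strategy="epoch",        # must match eval_strategy for best-model loading
    load_best_model_at_end=True,  # restore the best checkpoint when training stops
    metric_for_best_model="f1",   # track the F1 returned by compute_metrics
    num_train_epochs=50,          # upper bound; early stopping usually ends sooner
)

trainer = Trainer(
    model=model,  # model, datasets, and compute_metrics as in the sketch above
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
    # Stop once F1 fails to improve for 3 consecutive evaluations (illustrative).
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
```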
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1