018-microsoft-MiniLM-finetuned-yahoo-8000_2000

This model is a fine-tuned version of microsoft/MiniLM-L12-H384-uncased. The dataset name was not recorded in the training metadata; the model name suggests a Yahoo Answers topic-classification subset with 8,000 training and 2,000 evaluation examples. It achieves the following results on the evaluation set (a sketch of how metrics like these are typically computed follows the list):

  • Loss: 1.0511
  • F1: 0.6984
  • Accuracy: 0.701
  • Precision: 0.7000
  • Recall: 0.701
  • System RAM used: 4.0180 GB
  • System RAM total: 83.4807 GB
  • GPU RAM allocated: 0.3995 GB
  • GPU RAM cached: 12.9297 GB
  • GPU RAM total: 39.5640 GB
  • GPU utilization: 35%
  • Disk space used: 26.2045 GB
  • Disk space total: 78.1898 GB
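
The evaluation script itself is not included in this card. As a point of reference, here is a minimal sketch of how a metric set like this is typically produced in a Trainer `compute_metrics` callback. The weighted averaging is an assumption (it is consistent with recall matching accuracy above), not something the card states.

```python
# Hypothetical sketch: computing F1/accuracy/precision/recall in a Trainer
# compute_metrics callback. The "weighted" averaging below is an assumption,
# not taken from this model card.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "f1": f1,
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
    }
```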

Model description

More information needed

Intended uses & limitations

More information needed
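
Pending fuller documentation, below is a minimal inference sketch using the repo id from this card. The example sentence is illustrative, and the assumption that the checkpoint carries a standard sequence-classification head with topic labels (per the Yahoo naming) is not confirmed by the card.

```python
# Minimal inference sketch, assuming a standard sequence-classification head.
# The example text and the label interpretation are illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "diogopaes10/018-microsoft-MiniLM-finetuned-yahoo-8000_2000"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("How do I train a neural network?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
```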

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
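
The original training script is not included in the card. The sketch below shows one way the hyperparameters above could map onto `TrainingArguments` in Transformers 4.31. The 125-step evaluation cadence is inferred from the results table (one evaluation every 0.5 epoch), and the output directory is a placeholder.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# The eval/logging cadence (125 steps) is inferred from the results table,
# not stated in the card; the output directory is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="018-microsoft-MiniLM-finetuned-yahoo-8000_2000",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    # The card lists "Adam" with these betas/epsilon; Trainer's default AdamW
    # uses the same hyperparameter names.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=125,     # inferred: 0.5 epoch at batch size 32 on ~8,000 examples
    logging_steps=125,
)
```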

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | System RAM used (GB) | System RAM total (GB) | GPU RAM allocated (GB) | GPU RAM cached (GB) | GPU RAM total (GB) | GPU utilization (%) | Disk used (GB) | Disk total (GB) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.1461 | 0.5 | 125 | 1.8487 | 0.4711 | 0.5465 | 0.5181 | 0.5465 | 3.8798 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 24.5841 | 78.1898 |
| 1.6793 | 1.0 | 250 | 1.5280 | 0.5799 | 0.615 | 0.6207 | 0.615 | 3.8827 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 24.5842 | 78.1898 |
| 1.4163 | 1.5 | 375 | 1.3396 | 0.6508 | 0.6675 | 0.6691 | 0.6675 | 3.8831 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 24.5842 | 78.1898 |
| 1.2855 | 2.0 | 500 | 1.2413 | 0.6633 | 0.6745 | 0.6742 | 0.6745 | 3.8975 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 30 | 24.5843 | 78.1898 |
| 1.1364 | 2.5 | 625 | 1.1795 | 0.6658 | 0.6725 | 0.6758 | 0.6725 | 4.0967 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 31 | 25.4571 | 78.1898 |
| 1.0569 | 3.0 | 750 | 1.1167 | 0.6785 | 0.6845 | 0.6841 | 0.6845 | 4.0923 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 29 | 25.4573 | 78.1898 |
| 0.9596 | 3.5 | 875 | 1.0866 | 0.6883 | 0.698 | 0.6920 | 0.698 | 3.8765 | 83.4807 | 0.3997 | 12.9297 | 39.5640 | 29 | 25.4573 | 78.1898 |
| 0.917 | 4.0 | 1000 | 1.0703 | 0.6796 | 0.6875 | 0.6841 | 0.6875 | 3.8976 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 29 | 25.4573 | 78.1898 |
| 0.8512 | 4.5 | 1125 | 1.0629 | 0.6913 | 0.6915 | 0.6945 | 0.6915 | 4.0600 | 83.4807 | 0.3997 | 12.9297 | 39.5640 | 28 | 25.8306 | 78.1898 |
| 0.8121 | 5.0 | 1250 | 1.0576 | 0.6838 | 0.691 | 0.6905 | 0.691 | 4.0432 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 31 | 25.8306 | 78.1898 |
| 0.7733 | 5.5 | 1375 | 1.0598 | 0.6774 | 0.6805 | 0.6838 | 0.6805 | 3.8379 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 25.8307 | 78.1898 |
| 0.7431 | 6.0 | 1500 | 1.0376 | 0.6974 | 0.702 | 0.6976 | 0.702 | 3.8546 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 31 | 25.8307 | 78.1898 |
| 0.7065 | 6.5 | 1625 | 1.0457 | 0.6990 | 0.6995 | 0.7014 | 0.6995 | 4.0339 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 26.2040 | 78.1898 |
| 0.671 | 7.0 | 1750 | 1.0396 | 0.6956 | 0.698 | 0.6966 | 0.698 | 4.0384 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 26.2040 | 78.1898 |
| 0.6438 | 7.5 | 1875 | 1.0474 | 0.6887 | 0.6925 | 0.6907 | 0.6925 | 3.8274 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 26.2040 | 78.1898 |
| 0.6326 | 8.0 | 2000 | 1.0384 | 0.6972 | 0.698 | 0.6983 | 0.698 | 3.8402 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 34 | 26.2041 | 78.1898 |
| 0.6121 | 8.5 | 2125 | 1.0440 | 0.6963 | 0.698 | 0.6976 | 0.698 | 4.0162 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 29 | 26.2042 | 78.1898 |
| 0.5911 | 9.0 | 2250 | 1.0518 | 0.6995 | 0.701 | 0.7006 | 0.701 | 4.0338 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 28 | 26.2043 | 78.1898 |
| 0.592 | 9.5 | 2375 | 1.0490 | 0.7023 | 0.7035 | 0.7025 | 0.7035 | 3.8126 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 27 | 26.2043 | 78.1898 |
| 0.5586 | 10.0 | 2500 | 1.0511 | 0.6984 | 0.701 | 0.7000 | 0.701 | 3.8448 | 83.4807 | 0.3996 | 12.9297 | 39.5640 | 27 | 26.2043 | 78.1898 |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3
