m-minilm-l12-h384-dra-tam-mal-meso-meme-detection

This model is a fine-tuned version of microsoft/Multilingual-MiniLM-L12-H384 (the fine-tuning dataset is not specified in this card). It achieves the following results on the evaluation set; a short usage sketch follows the results:

  • Loss: 0.6007
  • Accuracy: 0.7117
  • F1: 0.5328
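
For reference, here is a minimal inference sketch. It assumes the published checkpoint (livinNector/m-minilm-l12-h384-dra-tam-mal-meso-meme-detection) loads as a standard sequence-classification model and that the label names stored in its config are meaningful; the input string is only a placeholder, not data from this card.

```python
# Minimal inference sketch (assumes a standard AutoModelForSequenceClassification
# head; label names are read from the checkpoint's config).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "livinNector/m-minilm-l12-h384-dra-tam-mal-meso-meme-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = ["<meme transcription or caption text here>"]  # placeholder input
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1).tolist()
print([model.config.id2label[i] for i in pred_ids])
```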

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch with the Transformers Trainer follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
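
The hyperparameters above map directly onto a standard Transformers Trainer setup. The sketch below is a hypothetical reproduction under stated assumptions: the real dataset, preprocessing, and label set are not described in this card, so a two-example toy dataset and an assumed binary label set stand in, and the optimizer is left at the Trainer default AdamW (whose betas and epsilon match the values listed above). The eval-every-2-steps setting is inferred from the step column in the results table below.

```python
# Hypothetical reproduction sketch of the reported hyperparameters.
# The dataset below is a two-example placeholder, NOT the data used for this model.
import numpy as np
import evaluate
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

base_model = "microsoft/Multilingual-MiniLM-L12-H384"
tokenizer = AutoTokenizer.from_pretrained(base_model)
# num_labels=2 is an assumption (binary meme-detection labels).
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Placeholder data standing in for the unspecified meme-text dataset.
raw = Dataset.from_dict(
    {"text": ["example meme text", "another example meme text"], "label": [0, 1]}
)
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels)["f1"],
    }

args = TrainingArguments(
    output_dir="m-minilm-l12-h384-dra-tam-mal-meso-meme-detection",
    learning_rate=1e-4,
    per_device_train_batch_size=128,  # card reports train_batch_size: 128
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    eval_strategy="steps",
    eval_steps=2,  # inferred from the 2-step evaluation interval in the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,  # placeholder splits; replace with the real data
    eval_dataset=tokenized,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```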

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.6781        | 0.1429 | 2    | 0.6605          | 0.6914   | 0.0    |
| 0.6443        | 0.2857 | 4    | 0.6301          | 0.6914   | 0.0    |
| 0.6228        | 0.4286 | 6    | 0.6174          | 0.6914   | 0.0    |
| 0.6402        | 0.5714 | 8    | 0.6174          | 0.6914   | 0.0    |
| 0.6163        | 0.7143 | 10   | 0.6154          | 0.6914   | 0.0    |
| 0.6341        | 0.8571 | 12   | 0.6053          | 0.6914   | 0.0    |
| 0.5962        | 1.0    | 14   | 0.5972          | 0.6914   | 0.0    |
| 0.5522        | 1.1429 | 16   | 0.5845          | 0.7027   | 0.0704 |
| 0.577         | 1.2857 | 18   | 0.5854          | 0.7050   | 0.4696 |
| 0.5719        | 1.4286 | 20   | 0.5862          | 0.7117   | 0.4711 |
| 0.5876        | 1.5714 | 22   | 0.6007          | 0.7005   | 0.4981 |
| 0.5926        | 1.7143 | 24   | 0.5921          | 0.7117   | 0.4286 |
| 0.5405        | 1.8571 | 26   | 0.6141          | 0.6937   | 0.4963 |
| 0.4942        | 2.0    | 28   | 0.6078          | 0.7050   | 0.4609 |
| 0.5655        | 2.1429 | 30   | 0.6009          | 0.7140   | 0.4253 |
| 0.5051        | 2.2857 | 32   | 0.6054          | 0.7005   | 0.4242 |
| 0.5001        | 2.4286 | 34   | 0.6251          | 0.6892   | 0.4889 |
| 0.577         | 2.5714 | 36   | 0.6136          | 0.6982   | 0.4766 |
| 0.5034        | 2.7143 | 38   | 0.5884          | 0.7252   | 0.4352 |
| 0.5306        | 2.8571 | 40   | 0.5927          | 0.7252   | 0.3370 |
| 0.5255        | 3.0    | 42   | 0.5869          | 0.7275   | 0.3920 |
| 0.4786        | 3.1429 | 44   | 0.6012          | 0.7095   | 0.4901 |
| 0.5422        | 3.2857 | 46   | 0.6149          | 0.6914   | 0.4982 |
| 0.5278        | 3.4286 | 48   | 0.5914          | 0.7117   | 0.4797 |
| 0.4985        | 3.5714 | 50   | 0.5752          | 0.7297   | 0.4340 |
| 0.5262        | 3.7143 | 52   | 0.5707          | 0.7365   | 0.4658 |
| 0.522         | 3.8571 | 54   | 0.5797          | 0.7027   | 0.45   |
| 0.528         | 4.0    | 56   | 0.5775          | 0.7095   | 0.4603 |
| 0.4883        | 4.1429 | 58   | 0.5714          | 0.7185   | 0.4589 |
| 0.4504        | 4.2857 | 60   | 0.5754          | 0.7275   | 0.4265 |
| 0.5175        | 4.4286 | 62   | 0.5786          | 0.7252   | 0.4135 |
| 0.4654        | 4.5714 | 64   | 0.5740          | 0.7252   | 0.4299 |
| 0.5111        | 4.7143 | 66   | 0.5812          | 0.7140   | 0.4641 |
| 0.4264        | 4.8571 | 68   | 0.5962          | 0.7095   | 0.5169 |
| 0.465         | 5.0    | 70   | 0.6007          | 0.7117   | 0.5328 |
| 0.464         | 5.1429 | 72   | 0.6002          | 0.7117   | 0.4961 |
| 0.3746        | 5.2857 | 74   | 0.5943          | 0.7207   | 0.4918 |
| 0.4431        | 5.4286 | 76   | 0.5853          | 0.7342   | 0.5    |
| 0.4365        | 5.5714 | 78   | 0.5741          | 0.7365   | 0.5021 |
| 0.4253        | 5.7143 | 80   | 0.5698          | 0.7387   | 0.5167 |
| 0.4767        | 5.8571 | 82   | 0.5682          | 0.7432   | 0.5328 |
| 0.4225        | 6.0    | 84   | 0.5676          | 0.7387   | 0.5285 |

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.20.3