
scenario-non-kd-scr-ner-full-xlmr_data-univner_half44

This model is a fine-tuned version of FacebookAI/xlm-roberta-base for named-entity recognition. The training dataset is not documented on this card (the "univner_half" suffix in the model name suggests a UniversalNER-style half split, trained with seed 44). It achieves the following results on the evaluation set:

  • Loss: 0.5850
  • Precision: 0.3090
  • Recall: 0.4141
  • F1: 0.3539
  • Accuracy: 0.9219
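As a quick consistency check (not part of the original card), the reported F1 is the harmonic mean of the precision and recall listed above:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
precision = 0.3090
recall = 0.4141

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.3539, matching the reported F1
```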

Model description

More information needed

Intended uses & limitations

More information needed
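Since the card does not document usage, here is a minimal sketch of loading the checkpoint for token classification with the Hugging Face Transformers library. The NER label set is not documented, so inspect `model.config.id2label` after loading; the example sentence is an assumption for illustration only.

```python
# Hedged sketch: load the checkpoint as a token-classification (NER) model.
# Requires the `transformers` and `torch` packages and network access.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "haryoaw/scenario-non-kd-scr-ner-full-xlmr_data-univner_half44"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# The label set is not stated on the card; check what the config actually holds.
print(model.config.id2label)

ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Barack Obama visited Jakarta in 2010."))
```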

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 44
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
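The hyperparameters above can be expressed as a Transformers `TrainingArguments` configuration. This is a sketch, not the author's actual training script: only the numeric values come from the card, while the output directory and everything else are assumptions.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# `output_dir` is a placeholder; all other values are taken from the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlmr-ner-univner_half44",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=44,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```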

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3726        | 0.5828  | 500   | 0.3180          | 0.3185    | 0.1437 | 0.1981 | 0.9252   |
| 0.2842        | 1.1655  | 1000  | 0.3582          | 0.1651    | 0.2194 | 0.1884 | 0.9023   |
| 0.2484        | 1.7483  | 1500  | 0.3015          | 0.2876    | 0.1958 | 0.2330 | 0.9274   |
| 0.2164        | 2.3310  | 2000  | 0.3746          | 0.1573    | 0.2725 | 0.1995 | 0.8904   |
| 0.2037        | 2.9138  | 2500  | 0.3387          | 0.1816    | 0.2516 | 0.2110 | 0.9029   |
| 0.1786        | 3.4965  | 3000  | 0.3330          | 0.2181    | 0.2813 | 0.2457 | 0.9115   |
| 0.1563        | 4.0793  | 3500  | 0.3990          | 0.1694    | 0.3402 | 0.2262 | 0.8842   |
| 0.1285        | 4.6620  | 4000  | 0.3549          | 0.2149    | 0.3561 | 0.2680 | 0.9025   |
| 0.1123        | 5.2448  | 4500  | 0.3587          | 0.2343    | 0.3646 | 0.2852 | 0.9051   |
| 0.0914        | 5.8275  | 5000  | 0.3688          | 0.2347    | 0.3747 | 0.2886 | 0.9041   |
| 0.0742        | 6.4103  | 5500  | 0.3943          | 0.2424    | 0.3805 | 0.2961 | 0.9068   |
| 0.0692        | 6.9930  | 6000  | 0.3877          | 0.2513    | 0.3896 | 0.3055 | 0.9102   |
| 0.0538        | 7.5758  | 6500  | 0.3864          | 0.2837    | 0.3865 | 0.3272 | 0.9199   |
| 0.0463        | 8.1585  | 7000  | 0.4363          | 0.2537    | 0.4067 | 0.3125 | 0.9082   |
| 0.0395        | 8.7413  | 7500  | 0.4260          | 0.2618    | 0.4046 | 0.3179 | 0.9118   |
| 0.0342        | 9.3240  | 8000  | 0.4661          | 0.2441    | 0.4050 | 0.3046 | 0.9081   |
| 0.0309        | 9.9068  | 8500  | 0.4307          | 0.2850    | 0.3963 | 0.3315 | 0.9195   |
| 0.0246        | 10.4895 | 9000  | 0.4781          | 0.2541    | 0.4157 | 0.3154 | 0.9080   |
| 0.0223        | 11.0723 | 9500  | 0.4891          | 0.2721    | 0.4035 | 0.3250 | 0.9129   |
| 0.0182        | 11.6550 | 10000 | 0.4741          | 0.2897    | 0.3969 | 0.3349 | 0.9190   |
| 0.0172        | 12.2378 | 10500 | 0.4542          | 0.3522    | 0.3968 | 0.3732 | 0.9320   |
| 0.0155        | 12.8205 | 11000 | 0.4927          | 0.2961    | 0.3885 | 0.3361 | 0.9220   |
| 0.0127        | 13.4033 | 11500 | 0.4922          | 0.2991    | 0.4096 | 0.3458 | 0.9196   |
| 0.0118        | 13.9860 | 12000 | 0.5092          | 0.2809    | 0.4193 | 0.3364 | 0.9161   |
| 0.0096        | 14.5688 | 12500 | 0.4991          | 0.3093    | 0.4059 | 0.3511 | 0.9246   |
| 0.0095        | 15.1515 | 13000 | 0.5892          | 0.2645    | 0.4292 | 0.3273 | 0.9095   |
| 0.0082        | 15.7343 | 13500 | 0.5076          | 0.3108    | 0.4054 | 0.3519 | 0.9230   |
| 0.007         | 16.3170 | 14000 | 0.5248          | 0.3094    | 0.4147 | 0.3544 | 0.9228   |
| 0.0069        | 16.8998 | 14500 | 0.5352          | 0.2932    | 0.4210 | 0.3457 | 0.9183   |
| 0.006         | 17.4825 | 15000 | 0.5458          | 0.3034    | 0.4092 | 0.3485 | 0.9205   |
| 0.0056        | 18.0653 | 15500 | 0.5786          | 0.2857    | 0.4366 | 0.3454 | 0.9150   |
| 0.004         | 18.6480 | 16000 | 0.5608          | 0.3048    | 0.4168 | 0.3521 | 0.9214   |
| 0.0047        | 19.2308 | 16500 | 0.5394          | 0.3339    | 0.4007 | 0.3643 | 0.9263   |
| 0.0041        | 19.8135 | 17000 | 0.5308          | 0.3333    | 0.4085 | 0.3671 | 0.9279   |
| 0.0037        | 20.3963 | 17500 | 0.5628          | 0.3081    | 0.4184 | 0.3548 | 0.9207   |
| 0.0035        | 20.9790 | 18000 | 0.5735          | 0.3115    | 0.4144 | 0.3556 | 0.9219   |
| 0.0026        | 21.5618 | 18500 | 0.5682          | 0.3123    | 0.4109 | 0.3549 | 0.9225   |
| 0.0027        | 22.1445 | 19000 | 0.5672          | 0.3264    | 0.4108 | 0.3637 | 0.9255   |
| 0.0022        | 22.7273 | 19500 | 0.6179          | 0.2866    | 0.4356 | 0.3457 | 0.9147   |
| 0.0025        | 23.3100 | 20000 | 0.5594          | 0.3419    | 0.4004 | 0.3688 | 0.9288   |
| 0.0021        | 23.8928 | 20500 | 0.5850          | 0.3090    | 0.4141 | 0.3539 | 0.9219   |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.19.1

Model repository: haryoaw/scenario-non-kd-scr-ner-full-xlmr_data-univner_half44 (fine-tuned from FacebookAI/xlm-roberta-base)