---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd
  results: []
---

# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on a dataset that is not documented in this card (the model name and the answer/header/question label set point to FUNSD). It achieves the following results on the evaluation set; per-field entries are seqeval-style entity-level metrics, and a sketch of how they are computed follows the list:

- Loss: 1.6681
- Answer: precision 0.8778, recall 0.9058, F1 0.8916 (817 entities)
- Header: precision 0.6036, recall 0.5630, F1 0.5826 (119 entities)
- Question: precision 0.9025, recall 0.9109, F1 0.9067 (1077 entities)
- Overall Precision: 0.8760
- Overall Recall: 0.8882
- Overall F1: 0.8821
- Overall Accuracy: 0.8030
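The numbers above have the shape of seqeval's entity-level report. Here is a minimal sketch of how such metrics are produced from BIO-tagged predictions; the tag names mirror the FUNSD label set and are illustrative, not read from this checkpoint's config:

```python
# Entity-level precision/recall/F1 of the kind reported above, computed
# with the seqeval library on BIO-tagged sequences.
from seqeval.metrics import classification_report, f1_score

y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "B-HEADER"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "O"]]

print(f1_score(y_true, y_pred))               # overall micro-averaged F1
print(classification_report(y_true, y_pred))  # per-entity breakdown
```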

## Model description

LiLT (Language-independent Layout Transformer) pairs a pre-trained text encoder with a separate layout-embedding stream for structured-document understanding. This checkpoint fine-tunes the English variant, SCUT-DLVCLab/lilt-roberta-en-base, for token classification, labelling words on scanned forms as questions, answers, or headers.

## Intended uses & limitations

The model is intended for token classification on form-like document images whose words and bounding boxes come from an OCR step; performance on layouts or label schemes unlike its training forms is untested. A minimal inference sketch follows.
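The sketch below assumes the fine-tuned weights are loadable from a path or hub id named `lilt-en-funsd` (taken from this card's title; adjust to wherever the checkpoint actually lives), and the example words and boxes are made up:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "lilt-en-funsd"  # assumed path/hub id for this checkpoint
# RoBERTa-based tokenizers need add_prefix_space=True for pre-split words.
tokenizer = AutoTokenizer.from_pretrained(model_id, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# LiLT takes one 0-1000-normalized box per token: [x0, y0, x1, y1].
# In practice, words and boxes come from an OCR engine.
words = ["Date:", "2024-01-01"]
boxes = [[48, 84, 120, 100], [130, 84, 260, 100]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to subword tokens; special tokens get a zero box.
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in encoding.word_ids()
]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
predicted = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]), predicted)))
```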

## Training and evaluation data

Not documented in this card; the model name and the question/answer/header label set suggest the FUNSD form-understanding dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
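A sketch of these hyperparameters expressed as `transformers.TrainingArguments`; the output directory is taken from the model name, and the Adam betas/epsilon listed above match the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lilt-en-funsd",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2500,  # training_steps: 2500
    fp16=True,       # mixed_precision_training: Native AMP
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```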

### Training results

In the table below, the Answer, Header, and Question cells give precision / recall / F1; support is constant across evaluations (Answer: 817, Header: 119, Question: 1077).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.431 | 10.5263 | 200 | 1.0598 | 0.8018 / 0.8862 / 0.8419 | 0.4228 / 0.5294 / 0.4701 | 0.8756 / 0.8561 / 0.8657 | 0.8119 | 0.8490 | 0.8300 | 0.7774 |
| 0.0542 | 21.0526 | 400 | 1.2173 | 0.8382 / 0.9070 / 0.8713 | 0.5351 / 0.5126 / 0.5236 | 0.8883 / 0.8635 / 0.8757 | 0.8469 | 0.8604 | 0.8536 | 0.8016 |
| 0.014 | 31.5789 | 600 | 1.2955 | 0.8415 / 0.9033 / 0.8713 | 0.6211 / 0.4958 / 0.5514 | 0.8972 / 0.9081 / 0.9026 | 0.8608 | 0.8818 | 0.8712 | 0.8160 |
| 0.0064 | 42.1053 | 800 | 1.2848 | 0.8696 / 0.8654 / 0.8675 | 0.5194 / 0.5630 / 0.5403 | 0.8583 / 0.9053 / 0.8812 | 0.8417 | 0.8689 | 0.8550 | 0.8222 |
| 0.0037 | 52.6316 | 1000 | 1.5983 | 0.8531 / 0.9168 / 0.8838 | 0.5659 / 0.6134 / 0.5887 | 0.8946 / 0.8672 / 0.8807 | 0.8562 | 0.8723 | 0.8642 | 0.7916 |
| 0.0034 | 63.1579 | 1200 | 1.5936 | 0.8500 / 0.9155 / 0.8816 | 0.5619 / 0.4958 / 0.5268 | 0.8912 / 0.8979 / 0.8945 | 0.8570 | 0.8813 | 0.8690 | 0.8102 |
| 0.0021 | 73.6842 | 1400 | 1.4765 | 0.8558 / 0.9009 / 0.8778 | 0.5619 / 0.4958 / 0.5268 | 0.8850 / 0.9006 / 0.8928 | 0.8564 | 0.8768 | 0.8665 | 0.8010 |
| 0.0009 | 84.2105 | 1600 | 1.6681 | 0.8778 / 0.9058 / 0.8916 | 0.6036 / 0.5630 / 0.5826 | 0.9025 / 0.9109 / 0.9067 | 0.8760 | 0.8882 | 0.8821 | 0.8030 |
| 0.0003 | 94.7368 | 1800 | 1.6379 | 0.8595 / 0.8837 / 0.8715 | 0.5929 / 0.5630 / 0.5776 | 0.8967 / 0.9109 / 0.9037 | 0.8647 | 0.8793 | 0.8719 | 0.7986 |
| 0.0002 | 105.2632 | 2000 | 1.7186 | 0.8645 / 0.9058 / 0.8846 | 0.5676 / 0.5294 / 0.5478 | 0.8922 / 0.8988 / 0.8955 | 0.8631 | 0.8798 | 0.8713 | 0.7978 |
| 0.0003 | 115.7895 | 2200 | 1.6765 | 0.8690 / 0.8935 / 0.8811 | 0.5726 / 0.5630 / 0.5678 | 0.8935 / 0.9034 / 0.8984 | 0.8651 | 0.8793 | 0.8721 | 0.8000 |
| 0.0003 | 126.3158 | 2400 | 1.7309 | 0.8818 / 0.8947 / 0.8882 | 0.5676 / 0.5294 / 0.5478 | 0.8914 / 0.9071 / 0.8992 | 0.8698 | 0.8798 | 0.8748 | 0.7959 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1