---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
  - generated_from_trainer
model-index:
  - name: lilt-en-funsd
    results: []
---

# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on an unknown dataset (the model name suggests FUNSD, a form-understanding benchmark). It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 1.6046
- Answer: precision 0.8880, recall 0.9021, F1 0.8950 (support: 817)
- Header: precision 0.6762, recall 0.5966, F1 0.6339 (support: 119)
- Question: precision 0.8969, recall 0.9285, F1 0.9124 (support: 1077)
- Overall Precision: 0.8820
- Overall Recall: 0.8982
- Overall F1: 0.8900
- Overall Accuracy: 0.8187
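
A minimal inference sketch, assuming the standard `transformers` auto classes. The repo id is inferred from this card, and the words and 0-1000-normalized bounding boxes are illustrative placeholders; in practice they would come from an OCR step:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "chintans/lilt-en-funsd"  # assumed repo id, inferred from this card
# RoBERTa-style tokenizers need add_prefix_space=True for pre-split words.
tokenizer = AutoTokenizer.from_pretrained(repo, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(repo)

# Toy input: words plus 0-1000-normalized [x0, y0, x1, y1] boxes (OCR output in practice).
words = ["Date:", "2024-01-01"]
boxes = [[50, 50, 150, 80], [160, 50, 300, 80]]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LiLT expects one box per token; expand each word's box to its subword tokens.
bbox = [boxes[i] if i is not None else [0, 0, 0, 0] for i in enc.word_ids()]
enc["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**enc).logits
preds = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(enc.tokens(), preds)))
```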

## Model description

LiLT (Language-independent Layout Transformer) pairs a pre-trained text encoder with a decoupled layout encoder for structured-document understanding. This checkpoint fine-tunes the base model for token classification over form entities (the answer, header, and question labels reported above).

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
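
For reference, these settings map onto the `transformers` `TrainingArguments` API roughly as below. This is a minimal sketch, assuming the standard `Trainer` workflow; the model, dataset, and output directory wiring are omitted:

```python
from transformers import TrainingArguments

# Sketch: reproduces the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="lilt-en-funsd",      # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2500,                  # "training_steps" above
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas/epsilon matching the values listed above:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```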

### Training results

Per-label cells show precision / recall / F1; supports are constant across evaluation steps (Answer 817, Header 119, Question 1077).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.3984 | 10.5263 | 200 | 0.9319 | 0.8371 / 0.8996 / 0.8673 | 0.5238 / 0.4622 / 0.4911 | 0.8606 / 0.9229 / 0.8907 | 0.8344 | 0.8862 | 0.8596 | 0.7926 |
| 0.0543 | 21.0526 | 400 | 1.1018 | 0.8493 / 0.9033 / 0.8754 | 0.5688 / 0.5210 / 0.5439 | 0.8731 / 0.9201 / 0.8960 | 0.8476 | 0.8897 | 0.8682 | 0.8133 |
| 0.0134 | 31.5789 | 600 | 1.5231 | 0.8390 / 0.9119 / 0.8739 | 0.5727 / 0.5294 / 0.5502 | 0.9149 / 0.8988 / 0.9068 | 0.8638 | 0.8823 | 0.8729 | 0.8046 |
| 0.0101 | 42.1053 | 800 | 1.5678 | 0.8510 / 0.8947 / 0.8723 | 0.5701 / 0.5126 / 0.5398 | 0.8782 / 0.9304 / 0.9035 | 0.8514 | 0.8912 | 0.8709 | 0.8052 |
| 0.0041 | 52.6316 | 1000 | 1.6538 | 0.8400 / 0.9315 / 0.8833 | 0.6706 / 0.4790 / 0.5588 | 0.9063 / 0.8895 / 0.8978 | 0.8672 | 0.8823 | 0.8747 | 0.7979 |
| 0.0033 | 63.1579 | 1200 | 1.4464 | 0.8750 / 0.9168 / 0.8954 | 0.6049 / 0.4118 / 0.4900 | 0.8777 / 0.9331 / 0.9046 | 0.8660 | 0.8957 | 0.8806 | 0.8152 |
| 0.0015 | 73.6842 | 1400 | 1.5128 | 0.8680 / 0.9094 / 0.8882 | 0.6512 / 0.4706 / 0.5463 | 0.8907 / 0.9229 / 0.9065 | 0.8712 | 0.8907 | 0.8809 | 0.8213 |
| 0.0014 | 84.2105 | 1600 | 1.6089 | 0.8555 / 0.9204 / 0.8868 | 0.6075 / 0.5462 / 0.5752 | 0.8892 / 0.9387 / 0.9133 | 0.8610 | 0.9081 | 0.8839 | 0.8172 |
| 0.0005 | 94.7368 | 1800 | 1.6500 | 0.8660 / 0.9094 / 0.8872 | 0.6514 / 0.5966 / 0.6228 | 0.9028 / 0.9136 / 0.9082 | 0.8741 | 0.8932 | 0.8835 | 0.8135 |
| 0.0005 | 105.2632 | 2000 | 1.6204 | 0.8909 / 0.8996 / 0.8952 | 0.6404 / 0.6134 / 0.6266 | 0.8925 / 0.9331 / 0.9124 | 0.8780 | 0.9006 | 0.8892 | 0.8177 |
| 0.0004 | 115.7895 | 2200 | 1.6046 | 0.8880 / 0.9021 / 0.8950 | 0.6762 / 0.5966 / 0.6339 | 0.8969 / 0.9285 / 0.9124 | 0.8820 | 0.8982 | 0.8900 | 0.8187 |
| 0.0002 | 126.3158 | 2400 | 1.6270 | 0.8790 / 0.9070 / 0.8928 | 0.6574 / 0.5966 / 0.6256 | 0.8972 / 0.9239 / 0.9103 | 0.8772 | 0.8977 | 0.8873 | 0.8193 |
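
As a sanity check on the rounded numbers, per-label F1 is the harmonic mean of precision and recall. A quick sketch in plain Python, using the full-precision Answer values from the step-2200 row (the final reported checkpoint):

```python
# F1 = 2PR / (P + R); reproduces the Answer F1 at step 2200.
p, r = 0.8879518072289156, 0.9020807833537332
f1 = 2 * p * r / (p + r)
print(round(f1, 4))  # 0.895 -> matches the reported 0.8950
```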

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1