---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
  results: []
---


# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9565
- Answer: precision 0.8948, recall 0.9058, F1 0.9002 (817 entities)
- Header: precision 0.6869, recall 0.5714, F1 0.6239 (119 entities)
- Question: precision 0.8923, recall 0.9387, F1 0.9149 (1077 entities)
- Overall Precision: 0.8834
- Overall Recall: 0.9036
- Overall F1: 0.8934
- Overall Accuracy: 0.8096
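Each F1 score above is the harmonic mean of the corresponding precision and recall, as computed by entity-level evaluation tools such as seqeval. A quick sanity check, using the overall precision and recall reported above:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Overall precision/recall reported on the evaluation set above.
print(round(f1(0.8834, 0.9036), 4))  # -> 0.8934
```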

## Model description

This checkpoint fine-tunes [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) — LiLT (Language-independent Layout Transformer) paired with an English RoBERTa text encoder — for token classification on scanned forms. Given the words of a document and their 2D bounding boxes, the model labels each token as part of a question, answer, or header field.

## Intended uses & limitations

The model is intended for key-value extraction from English, form-like scanned documents similar to FUNSD. Note that while entity-level F1 plateaus around 0.89, the validation loss rises steadily over training (from 0.90 at step 200 to roughly 3.0 by step 2000) as the training loss approaches zero, which indicates substantial overfitting to the small training set. Expect degraded performance on documents whose layout or domain differs markedly from FUNSD forms.
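A minimal inference sketch is shown below. It is illustrative rather than verified against this exact checkpoint: the checkpoint path, example words, and bounding boxes are placeholders, so substitute your own saved model location and OCR output. LiLT expects one bounding box per word, normalized to a 0–1000 scale, and each sub-word token inherits its word's box:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

ckpt = "lilt-en-funsd"  # placeholder: local path or Hub id of this fine-tuned checkpoint
# add_prefix_space=True is required to feed pre-tokenized words to a RoBERTa tokenizer.
tokenizer = AutoTokenizer.from_pretrained(ckpt, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(ckpt)

words = ["Invoice", "Number:", "12345"]  # hypothetical OCR words
boxes = [[48, 84, 156, 98], [160, 84, 255, 98], [260, 84, 310, 98]]  # 0-1000 normalized

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Map each sub-word token back to its word's bounding box; special tokens get [0, 0, 0, 0].
bbox = [[0, 0, 0, 0] if i is None else boxes[i] for i in enc.word_ids()]
enc["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**enc).logits
preds = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
```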

## Training and evaluation data

The model was trained and evaluated on the funsd-layoutlmv3 dataset, a preprocessed version of FUNSD (Form Understanding in Noisy Scanned Documents: 149 training forms and 50 test forms) that provides word-level bounding boxes and BIO tags for the `question`, `answer`, `header`, and `other` categories.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
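The `linear` scheduler decays the learning rate from its initial value to zero over `training_steps`. A minimal sketch, assuming the Trainer default of zero warmup steps (the card does not record `warmup_steps` or `warmup_ratio`):

```python
def linear_lr(step: int, base_lr: float = 5e-5, total_steps: int = 2500,
              warmup_steps: int = 0) -> float:
    """Linear warmup (if any) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))     # 5e-05 at the first step
print(linear_lr(1250))  # 2.5e-05 halfway through training
print(linear_lr(2500))  # 0.0 at the final step
```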

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Answer                                                                                                   | Header                                                                                                    | Question                                                                                                  | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.409         | 10.53  | 200  | 0.8991          | {'precision': 0.8176855895196506, 'recall': 0.9167686658506732, 'f1': 0.8643969994229659, 'number': 817} | {'precision': 0.5094339622641509, 'recall': 0.453781512605042, 'f1': 0.48, 'number': 119}                 | {'precision': 0.891465677179963, 'recall': 0.8922934076137419, 'f1': 0.8918793503480278, 'number': 1077}  | 0.84              | 0.8763         | 0.8578     | 0.7897           |
| 0.0485        | 21.05  | 400  | 1.1875          | {'precision': 0.8504566210045662, 'recall': 0.9118727050183598, 'f1': 0.8800945067926758, 'number': 817} | {'precision': 0.5691056910569106, 'recall': 0.5882352941176471, 'f1': 0.578512396694215, 'number': 119}   | {'precision': 0.8970315398886828, 'recall': 0.8978644382544104, 'f1': 0.897447795823666, 'number': 1077}  | 0.8580            | 0.8852         | 0.8714     | 0.7935           |
| 0.0139        | 31.58  | 600  | 1.5032          | {'precision': 0.8455377574370709, 'recall': 0.9045287637698899, 'f1': 0.8740390301596689, 'number': 817} | {'precision': 0.6206896551724138, 'recall': 0.6050420168067226, 'f1': 0.6127659574468085, 'number': 119}  | {'precision': 0.9057142857142857, 'recall': 0.883008356545961, 'f1': 0.8942172073342736, 'number': 1077}  | 0.8637            | 0.8753         | 0.8695     | 0.7913           |
| 0.0083        | 42.11  | 800  | 1.4968          | {'precision': 0.8316939890710382, 'recall': 0.9314565483476133, 'f1': 0.8787528868360277, 'number': 817} | {'precision': 0.6363636363636364, 'recall': 0.47058823529411764, 'f1': 0.5410628019323671, 'number': 119} | {'precision': 0.8928909952606635, 'recall': 0.8746518105849582, 'f1': 0.8836772983114447, 'number': 1077} | 0.8547            | 0.8738         | 0.8642     | 0.8017           |
| 0.0058        | 52.63  | 1000 | 1.7837          | {'precision': 0.8385300668151447, 'recall': 0.9216646266829865, 'f1': 0.8781341107871721, 'number': 817} | {'precision': 0.6138613861386139, 'recall': 0.5210084033613446, 'f1': 0.5636363636363637, 'number': 119}  | {'precision': 0.8972667295004713, 'recall': 0.8839368616527391, 'f1': 0.8905519176800748, 'number': 1077} | 0.8578            | 0.8778         | 0.8677     | 0.7914           |
| 0.008         | 63.16  | 1200 | 1.8600          | {'precision': 0.8239130434782609, 'recall': 0.9277845777233782, 'f1': 0.8727691421991941, 'number': 817} | {'precision': 0.5865384615384616, 'recall': 0.5126050420168067, 'f1': 0.5470852017937219, 'number': 119}  | {'precision': 0.9037735849056604, 'recall': 0.8895078922934077, 'f1': 0.8965839962564343, 'number': 1077} | 0.8527            | 0.8828         | 0.8675     | 0.8009           |
| 0.0037        | 73.68  | 1400 | 2.8372          | {'precision': 0.8821428571428571, 'recall': 0.9069767441860465, 'f1': 0.8943874471937237, 'number': 817} | {'precision': 0.5966386554621849, 'recall': 0.5966386554621849, 'f1': 0.5966386554621849, 'number': 119}  | {'precision': 0.8961748633879781, 'recall': 0.9136490250696379, 'f1': 0.9048275862068965, 'number': 1077} | 0.8731            | 0.8922         | 0.8826     | 0.7928           |
| 0.004         | 84.21  | 1600 | 2.8378          | {'precision': 0.881578947368421, 'recall': 0.9020807833537332, 'f1': 0.8917120387174834, 'number': 817}  | {'precision': 0.631578947368421, 'recall': 0.6050420168067226, 'f1': 0.6180257510729613, 'number': 119}   | {'precision': 0.891989198919892, 'recall': 0.9201485608170845, 'f1': 0.9058500914076782, 'number': 1077}  | 0.8734            | 0.8942         | 0.8837     | 0.8079           |
| 0.0018        | 94.74  | 1800 | 3.0272          | {'precision': 0.8742655699177438, 'recall': 0.9106487148102815, 'f1': 0.8920863309352519, 'number': 817} | {'precision': 0.6759259259259259, 'recall': 0.6134453781512605, 'f1': 0.6431718061674008, 'number': 119}  | {'precision': 0.89937106918239, 'recall': 0.9294336118848654, 'f1': 0.9141552511415526, 'number': 1077}   | 0.8774            | 0.9031         | 0.8901     | 0.7992           |
| 0.0008        | 105.26 | 2000 | 2.9565          | {'precision': 0.8948004836759371, 'recall': 0.9057527539779682, 'f1': 0.9002433090024331, 'number': 817} | {'precision': 0.6868686868686869, 'recall': 0.5714285714285714, 'f1': 0.6238532110091742, 'number': 119}  | {'precision': 0.8923212709620476, 'recall': 0.9387186629526463, 'f1': 0.9149321266968325, 'number': 1077} | 0.8834            | 0.9036         | 0.8934     | 0.8096           |
| 0.0008        | 115.79 | 2200 | 3.1429          | {'precision': 0.8411111111111111, 'recall': 0.9265605875152999, 'f1': 0.881770529994176, 'number': 817}  | {'precision': 0.6666666666666666, 'recall': 0.5546218487394958, 'f1': 0.6055045871559633, 'number': 119}  | {'precision': 0.9147141518275539, 'recall': 0.9062209842154132, 'f1': 0.9104477611940299, 'number': 1077} | 0.8708            | 0.8937         | 0.8821     | 0.7970           |
| 0.0005        | 126.32 | 2400 | 3.0269          | {'precision': 0.8617511520737328, 'recall': 0.9155446756425949, 'f1': 0.8878338278931751, 'number': 817} | {'precision': 0.6952380952380952, 'recall': 0.6134453781512605, 'f1': 0.6517857142857143, 'number': 119}  | {'precision': 0.906871609403255, 'recall': 0.9312906220984215, 'f1': 0.9189189189189189, 'number': 1077}  | 0.8773            | 0.9061         | 0.8915     | 0.7994           |


### Framework versions

- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3