---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd
  results: []
---


# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base). The auto-generated card did not record the dataset, but the model name and the answer/header/question entity labels indicate the [FUNSD](https://guillaumejaume.github.io/FUNSD/) form-understanding benchmark.
It achieves the following results on the evaluation set:
- Loss: 1.7648
- Overall Precision: 0.8793
- Overall Recall: 0.8862
- Overall F1: 0.8827
- Overall Accuracy: 0.8009

Per-entity results (values rounded to four decimals):

| Entity   | Precision | Recall | F1     | Support |
|:--------:|:---------:|:------:|:------:|:-------:|
| Answer   | 0.8672    | 0.9033 | 0.8849 | 817     |
| Header   | 0.6667    | 0.4874 | 0.5631 | 119     |
| Question | 0.9056    | 0.9174 | 0.9114 | 1077    |

## Model description

lilt-en-funsd is a token-classification model built on LiLT (Language-Independent Layout Transformer). LiLT pairs a pre-trained RoBERTa text encoder with a decoupled layout encoder, so each token is represented by both its text and its bounding box on the page. This fine-tuned checkpoint adds a classification head that labels the tokens of a scanned form as belonging to a question, an answer, or a header field.

## Intended uses & limitations

The model is intended for form understanding on scanned English documents: given OCR'd words and their bounding boxes (normalised to a 0-1000 coordinate space, as LiLT expects), it tags each token as question, answer, or header. Known limitations: it was fine-tuned on a small corpus of noisy scanned forms, the header class is markedly weaker than the others (F1 0.5631 vs. 0.8849/0.9114), and behaviour on other languages or non-form documents is untested. A minimal inference sketch follows.
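
The snippet below is a hedged usage sketch, not code from the original training run; the repository id and the example words/boxes are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical repo id for this checkpoint; substitute the actual path.
model_id = "lilt-en-funsd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Toy OCR output: one word per box, coordinates normalised to 0-1000.
words = ["Date:", "March", "3,", "1998"]
boxes = [[57, 11, 98, 24], [104, 11, 147, 24], [152, 11, 163, 24], [168, 11, 202, 24]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LiLT needs one bounding box per sub-token: repeat each word's box,
# and use a zero box for the special tokens (<s>, </s>).
bbox = [[0, 0, 0, 0] if idx is None else boxes[idx] for idx in encoding.word_ids(0)]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits
predicted = [model.config.id2label[i] for i in logits.argmax(-1).squeeze().tolist()]
print(list(zip(encoding.tokens(), predicted)))
```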

## Training and evaluation data

Not recorded by the card generator. The model name and label set strongly suggest FUNSD, a dataset of 199 noisy scanned forms (149 for training, 50 for evaluation) annotated with question, answer, and header entities.
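
Assuming FUNSD is indeed the dataset, a preprocessed copy such as `nielsr/funsd-layoutlmv3` on the Hub can be loaded as follows (the dataset id and column names here are assumptions about one common variant):

```python
from datasets import load_dataset

# Assumed dataset id: a FUNSD variant that ships OCR words, boxes, and NER tags.
dataset = load_dataset("nielsr/funsd-layoutlmv3")

example = dataset["train"][0]
print(example["tokens"][:5])    # OCR words
print(example["bboxes"][:5])    # boxes normalised to 0-1000
print(example["ner_tags"][:5])  # integer labels over O / B- / I- HEADER, QUESTION, ANSWER
```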

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
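
These values map directly onto `transformers.TrainingArguments`; the sketch below is a reconstruction, not the original script. The `output_dir` and the evaluation cadence are assumptions (the results table suggests evaluation every 200 steps), and the Adam betas/epsilon listed above are already the `Trainer` defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lilt-en-funsd",    # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2500,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="steps",   # assumed from the eval-every-200-steps table
    eval_steps=200,
    logging_steps=200,
)
# adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the Trainer
# defaults, matching the optimizer line above.
```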

### Training results

Entity support is constant across evaluations (Answer = 817, Header = 119, Question = 1077); per-entity values are rounded to four decimals.

| Training Loss | Epoch  | Step | Validation Loss | Answer P | Answer R | Answer F1 | Header P | Header R | Header F1 | Question P | Question R | Question F1 | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:---------:|:--------:|:--------:|:---------:|:----------:|:----------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.437         | 10.53  | 200  | 0.9767          | 0.8161   | 0.8800   | 0.8469    | 0.4875   | 0.3277   | 0.3920    | 0.8519     | 0.9136     | 0.8817      | 0.8233            | 0.8654         | 0.8438     | 0.7825           |
| 0.0411        | 21.05  | 400  | 1.3412          | 0.8214   | 0.9229   | 0.8692    | 0.4894   | 0.5798   | 0.5308    | 0.9178     | 0.8607     | 0.8884      | 0.8458            | 0.8693         | 0.8574     | 0.8058           |
| 0.013         | 31.58  | 600  | 1.3818          | 0.8341   | 0.9106   | 0.8707    | 0.5595   | 0.3950   | 0.4631    | 0.8754     | 0.9331     | 0.9034      | 0.8456            | 0.8922         | 0.8683     | 0.8002           |
| 0.0079        | 42.11  | 800  | 1.5417          | 0.8313   | 0.9106   | 0.8692    | 0.5789   | 0.5546   | 0.5665    | 0.8834     | 0.8867     | 0.8851      | 0.8445            | 0.8768         | 0.8603     | 0.7787           |
| 0.0042        | 52.63  | 1000 | 1.7697          | 0.8411   | 0.9266   | 0.8818    | 0.6364   | 0.3529   | 0.4541    | 0.8674     | 0.9294     | 0.8974      | 0.8491            | 0.8942         | 0.8710     | 0.7868           |
| 0.0025        | 63.16  | 1200 | 1.6700          | 0.8520   | 0.9021   | 0.8763    | 0.5435   | 0.4202   | 0.4739    | 0.8812     | 0.9025     | 0.8917      | 0.8539            | 0.8738         | 0.8637     | 0.7795           |
| 0.0013        | 73.68  | 1400 | 1.8217          | 0.8444   | 0.9168   | 0.8791    | 0.5814   | 0.4202   | 0.4878    | 0.8941     | 0.9090     | 0.9015      | 0.8598            | 0.8833         | 0.8714     | 0.7878           |
| 0.0007        | 84.21  | 1600 | 1.7507          | 0.8437   | 0.9119   | 0.8765    | 0.6795   | 0.4454   | 0.5381    | 0.8889     | 0.9211     | 0.9047      | 0.8618            | 0.8892         | 0.8753     | 0.7901           |
| 0.0006        | 94.74  | 1800 | 1.7257          | 0.8540   | 0.9021   | 0.8774    | 0.6344   | 0.4958   | 0.5566    | 0.8943     | 0.9034     | 0.8988      | 0.8655            | 0.8788         | 0.8721     | 0.7922           |
| 0.0004        | 105.26 | 2000 | 1.7648          | 0.8672   | 0.9033   | 0.8849    | 0.6667   | 0.4874   | 0.5631    | 0.9056     | 0.9174     | 0.9114      | 0.8793            | 0.8862         | 0.8827     | 0.8009           |
| 0.0003        | 115.79 | 2200 | 1.7698          | 0.8616   | 0.9070   | 0.8837    | 0.6374   | 0.4874   | 0.5524    | 0.9104     | 0.9155     | 0.9130      | 0.8776            | 0.8867         | 0.8821     | 0.8007           |
| 0.0003        | 126.32 | 2400 | 1.7623          | 0.8596   | 0.9070   | 0.8827    | 0.5769   | 0.5042   | 0.5381    | 0.9018     | 0.9127     | 0.9072      | 0.8677            | 0.8862         | 0.8769     | 0.7973           |
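
The metrics above have the shape produced by the `seqeval` metric from the `evaluate` library; the sketch below shows a typical `compute_metrics` for this setup (the label list is an assumption inferred from the entity names):

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Assumed label set, inferred from the Answer/Header/Question entities above.
label_list = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special tokens, which are labelled -100 during preprocessing.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    # seqeval returns per-entity precision/recall/f1/number dicts plus
    # overall_* keys, exactly the shape reported in the tables above.
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```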


### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0