begnini committed
Commit bfe9e2c
1 Parent(s): e22c029

contratos_tceal

README.md ADDED
@@ -0,0 +1,108 @@
+ ---
+ base_model: pierreguillou/ner-bert-large-cased-pt-lenerbr
+ tags:
+ - generated_from_trainer
+ datasets:
+ - contratos_tceal
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: ner-bert-large-cased-pt-contratos_tceal
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: contratos_tceal
+       type: contratos_tceal
+       config: contratos_tceal
+       split: validation
+       args: contratos_tceal
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.863676600767188
+     - name: Recall
+       type: recall
+       value: 0.8834892846362813
+     - name: F1
+       type: f1
+       value: 0.8734706057893166
+     - name: Accuracy
+       type: accuracy
+       value: 0.9145210809496307
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # ner-bert-large-cased-pt-contratos_tceal
+
+ This model is a fine-tuned version of [pierreguillou/ner-bert-large-cased-pt-lenerbr](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr) on the contratos_tceal dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: nan
+ - Precision: 0.8637
+ - Recall: 0.8835
+ - F1: 0.8735
+ - Accuracy: 0.9145
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 20
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log | 1.0 | 191 | nan | 0.3715 | 0.3350 | 0.3523 | 0.5279 |
+ | No log | 2.0 | 382 | nan | 0.4724 | 0.5889 | 0.5243 | 0.6579 |
+ | 2.0083 | 3.0 | 573 | nan | 0.6952 | 0.7483 | 0.7207 | 0.8521 |
+ | 2.0083 | 4.0 | 764 | nan | 0.7303 | 0.7911 | 0.7595 | 0.8775 |
+ | 2.0083 | 5.0 | 955 | nan | 0.7914 | 0.8005 | 0.7959 | 0.8917 |
+ | 0.4952 | 6.0 | 1146 | nan | 0.8545 | 0.8702 | 0.8623 | 0.9099 |
+ | 0.4952 | 7.0 | 1337 | nan | 0.8507 | 0.8775 | 0.8639 | 0.9086 |
+ | 0.2482 | 8.0 | 1528 | nan | 0.8530 | 0.8708 | 0.8618 | 0.9085 |
+ | 0.2482 | 9.0 | 1719 | nan | 0.8546 | 0.8744 | 0.8644 | 0.9108 |
+ | 0.2482 | 10.0 | 1910 | nan | 0.8563 | 0.8720 | 0.8641 | 0.9105 |
+ | 0.169 | 11.0 | 2101 | nan | 0.8632 | 0.8741 | 0.8686 | 0.9092 |
+ | 0.169 | 12.0 | 2292 | nan | 0.8640 | 0.8805 | 0.8722 | 0.9089 |
+ | 0.169 | 13.0 | 2483 | nan | 0.8598 | 0.8756 | 0.8677 | 0.9096 |
+ | 0.1255 | 14.0 | 2674 | nan | 0.8622 | 0.8799 | 0.8709 | 0.9121 |
+ | 0.1255 | 15.0 | 2865 | nan | 0.8603 | 0.8814 | 0.8707 | 0.9113 |
+ | 0.0942 | 16.0 | 3056 | nan | 0.8612 | 0.8787 | 0.8699 | 0.9114 |
+ | 0.0942 | 17.0 | 3247 | nan | 0.8626 | 0.8793 | 0.8709 | 0.9133 |
+ | 0.0942 | 18.0 | 3438 | nan | 0.8640 | 0.8823 | 0.8731 | 0.9132 |
+ | 0.0795 | 19.0 | 3629 | nan | 0.8608 | 0.8808 | 0.8707 | 0.9139 |
+ | 0.0795 | 20.0 | 3820 | nan | 0.8637 | 0.8835 | 0.8735 | 0.9145 |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.1
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
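
The committed card stops at the framework versions and does not yet show how to run the model. Below is a minimal inference sketch, not taken from this repository: the repository id `begnini/ner-bert-large-cased-pt-contratos_tceal` is assumed from the committer and model name, the example sentence is invented, and `aggregation_strategy="simple"` is one reasonable way to merge the BILOU (B-/I-/L-/U-) tags defined in config.json into entity spans.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Assumed repository id (committer + model name from this commit).
model_id = "begnini/ner-bert-large-cased-pt-contratos_tceal"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# "simple" merges sub-word pieces of adjacent tokens with the same entity type;
# since the label set is BILOU rather than plain BIO, spot-check the grouped spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

# Invented example text in the style of a Brazilian public-contract notice.
text = ("Contrato nº 12/2023 celebrado com a empresa Exemplo Construções LTDA, "
        "CNPJ 12.345.678/0001-90, no valor de R$ 150.000,00, vigência de 12 meses.")

for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```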
config.json ADDED
@@ -0,0 +1,234 @@
+ {
+   "_name_or_path": "pierreguillou/ner-bert-large-cased-pt-lenerbr",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "B-CNPJ_CONTRATADA",
+     "1": "I-CNPJ_CONTRATADA",
+     "10": "L-CRITERIO_JULGAMENTO",
+     "11": "U-CRITERIO_JULGAMENTO",
+     "12": "B-DATA_ATO",
+     "13": "I-DATA_ATO",
+     "14": "L-DATA_ATO",
+     "15": "U-DATA_ATO",
+     "16": "B-DATA_INICIO",
+     "17": "I-DATA_INICIO",
+     "18": "L-DATA_INICIO",
+     "19": "U-DATA_INICIO",
+     "2": "L-CNPJ_CONTRATADA",
+     "20": "B-DATA_REALIZACAO",
+     "21": "I-DATA_REALIZACAO",
+     "22": "L-DATA_REALIZACAO",
+     "23": "U-DATA_REALIZACAO",
+     "24": "B-DOTACAO",
+     "25": "I-DOTACAO",
+     "26": "L-DOTACAO",
+     "27": "U-DOTACAO",
+     "28": "B-ELEMENTO_DESPESA",
+     "29": "I-ELEMENTO_DESPESA",
+     "3": "U-CNPJ_CONTRATADA",
+     "30": "L-ELEMENTO_DESPESA",
+     "31": "U-ELEMENTO_DESPESA",
+     "32": "B-EMPRESA_VENCEDORA",
+     "33": "I-EMPRESA_VENCEDORA",
+     "34": "L-EMPRESA_VENCEDORA",
+     "35": "U-EMPRESA_VENCEDORA",
+     "36": "B-FUNDAMENTACAO_LEGAL",
+     "37": "I-FUNDAMENTACAO_LEGAL",
+     "38": "L-FUNDAMENTACAO_LEGAL",
+     "39": "U-FUNDAMENTACAO_LEGAL",
+     "4": "B-CNPJ_CONTRATANTE",
+     "40": "B-INFORMACOES",
+     "41": "I-INFORMACOES",
+     "42": "L-INFORMACOES",
+     "43": "U-INFORMACOES",
+     "44": "B-MODALIDADE",
+     "45": "I-MODALIDADE",
+     "46": "L-MODALIDADE",
+     "47": "U-MODALIDADE",
+     "48": "B-NUMERO_ATO",
+     "49": "I-NUMERO_ATO",
+     "5": "I-CNPJ_CONTRATANTE",
+     "50": "L-NUMERO_ATO",
+     "51": "U-NUMERO_ATO",
+     "52": "B-NUMERO_CONTRATO",
+     "53": "I-NUMERO_CONTRATO",
+     "54": "L-NUMERO_CONTRATO",
+     "55": "U-NUMERO_CONTRATO",
+     "56": "B-NUMERO_EDITAL",
+     "57": "L-NUMERO_EDITAL",
+     "58": "U-NUMERO_EDITAL",
+     "59": "B-NUMERO_LICITACAO",
+     "6": "L-CNPJ_CONTRATANTE",
+     "60": "I-NUMERO_LICITACAO",
+     "61": "L-NUMERO_LICITACAO",
+     "62": "U-NUMERO_LICITACAO",
+     "63": "B-NUMERO_PROCESSO",
+     "64": "I-NUMERO_PROCESSO",
+     "65": "L-NUMERO_PROCESSO",
+     "66": "U-NUMERO_PROCESSO",
+     "67": "O",
+     "68": "B-OBJETO",
+     "69": "I-OBJETO",
+     "7": "U-CNPJ_CONTRATANTE",
+     "70": "L-OBJETO",
+     "71": "U-OBJETO",
+     "72": "B-ORGAO",
+     "73": "I-ORGAO",
+     "74": "L-ORGAO",
+     "75": "B-ORGAO_CONTRATANTE",
+     "76": "I-ORGAO_CONTRATANTE",
+     "77": "L-ORGAO_CONTRATANTE",
+     "78": "U-ORGAO_CONTRATANTE",
+     "79": "B-PRAZO",
+     "8": "B-CRITERIO_JULGAMENTO",
+     "80": "I-PRAZO",
+     "81": "L-PRAZO",
+     "82": "U-PRAZO",
+     "83": "B-TIPO",
+     "84": "I-TIPO",
+     "85": "L-TIPO",
+     "86": "U-TIPO",
+     "87": "B-TIPO_PUBLICACAO",
+     "88": "I-TIPO_PUBLICACAO",
+     "89": "L-TIPO_PUBLICACAO",
+     "9": "I-CRITERIO_JULGAMENTO",
+     "90": "U-TIPO_PUBLICACAO",
+     "91": "B-VALOR",
+     "92": "I-VALOR",
+     "93": "L-VALOR",
+     "94": "U-VALOR",
+     "95": "B-VIGENCIA",
+     "96": "I-VIGENCIA",
+     "97": "L-VIGENCIA",
+     "98": "U-VIGENCIA"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "label2id": {
+     "B-CNPJ_CONTRATADA": 0,
+     "B-CNPJ_CONTRATANTE": 4,
+     "B-CRITERIO_JULGAMENTO": 8,
+     "B-DATA_ATO": 12,
+     "B-DATA_INICIO": 16,
+     "B-DATA_REALIZACAO": 20,
+     "B-DOTACAO": 24,
+     "B-ELEMENTO_DESPESA": 28,
+     "B-EMPRESA_VENCEDORA": 32,
+     "B-FUNDAMENTACAO_LEGAL": 36,
+     "B-INFORMACOES": 40,
+     "B-MODALIDADE": 44,
+     "B-NUMERO_ATO": 48,
+     "B-NUMERO_CONTRATO": 52,
+     "B-NUMERO_EDITAL": 56,
+     "B-NUMERO_LICITACAO": 59,
+     "B-NUMERO_PROCESSO": 63,
+     "B-OBJETO": 68,
+     "B-ORGAO": 72,
+     "B-ORGAO_CONTRATANTE": 75,
+     "B-PRAZO": 79,
+     "B-TIPO": 83,
+     "B-TIPO_PUBLICACAO": 87,
+     "B-VALOR": 91,
+     "B-VIGENCIA": 95,
+     "I-CNPJ_CONTRATADA": 1,
+     "I-CNPJ_CONTRATANTE": 5,
+     "I-CRITERIO_JULGAMENTO": 9,
+     "I-DATA_ATO": 13,
+     "I-DATA_INICIO": 17,
+     "I-DATA_REALIZACAO": 21,
+     "I-DOTACAO": 25,
+     "I-ELEMENTO_DESPESA": 29,
+     "I-EMPRESA_VENCEDORA": 33,
+     "I-FUNDAMENTACAO_LEGAL": 37,
+     "I-INFORMACOES": 41,
+     "I-MODALIDADE": 45,
+     "I-NUMERO_ATO": 49,
+     "I-NUMERO_CONTRATO": 53,
+     "I-NUMERO_LICITACAO": 60,
+     "I-NUMERO_PROCESSO": 64,
+     "I-OBJETO": 69,
+     "I-ORGAO": 73,
+     "I-ORGAO_CONTRATANTE": 76,
+     "I-PRAZO": 80,
+     "I-TIPO": 84,
+     "I-TIPO_PUBLICACAO": 88,
+     "I-VALOR": 92,
+     "I-VIGENCIA": 96,
+     "L-CNPJ_CONTRATADA": 2,
+     "L-CNPJ_CONTRATANTE": 6,
+     "L-CRITERIO_JULGAMENTO": 10,
+     "L-DATA_ATO": 14,
+     "L-DATA_INICIO": 18,
+     "L-DATA_REALIZACAO": 22,
+     "L-DOTACAO": 26,
+     "L-ELEMENTO_DESPESA": 30,
+     "L-EMPRESA_VENCEDORA": 34,
+     "L-FUNDAMENTACAO_LEGAL": 38,
+     "L-INFORMACOES": 42,
+     "L-MODALIDADE": 46,
+     "L-NUMERO_ATO": 50,
+     "L-NUMERO_CONTRATO": 54,
+     "L-NUMERO_EDITAL": 57,
+     "L-NUMERO_LICITACAO": 61,
+     "L-NUMERO_PROCESSO": 65,
+     "L-OBJETO": 70,
+     "L-ORGAO": 74,
+     "L-ORGAO_CONTRATANTE": 77,
+     "L-PRAZO": 81,
+     "L-TIPO": 85,
+     "L-TIPO_PUBLICACAO": 89,
+     "L-VALOR": 93,
+     "L-VIGENCIA": 97,
+     "O": 67,
+     "U-CNPJ_CONTRATADA": 3,
+     "U-CNPJ_CONTRATANTE": 7,
+     "U-CRITERIO_JULGAMENTO": 11,
+     "U-DATA_ATO": 15,
+     "U-DATA_INICIO": 19,
+     "U-DATA_REALIZACAO": 23,
+     "U-DOTACAO": 27,
+     "U-ELEMENTO_DESPESA": 31,
+     "U-EMPRESA_VENCEDORA": 35,
+     "U-FUNDAMENTACAO_LEGAL": 39,
+     "U-INFORMACOES": 43,
+     "U-MODALIDADE": 47,
+     "U-NUMERO_ATO": 51,
+     "U-NUMERO_CONTRATO": 55,
+     "U-NUMERO_EDITAL": 58,
+     "U-NUMERO_LICITACAO": 62,
+     "U-NUMERO_PROCESSO": 66,
+     "U-OBJETO": 71,
+     "U-ORGAO_CONTRATANTE": 78,
+     "U-PRAZO": 82,
+     "U-TIPO": 86,
+     "U-TIPO_PUBLICACAO": 90,
+     "U-VALOR": 94,
+     "U-VIGENCIA": 98
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "output_past": true,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.36.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 29794
+ }
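
The tag set above is a BILOU scheme (B-egin, I-nside, L-ast, U-nit token) over 25 contract-related entity types plus `O`, 99 classes in total; a few types are incomplete (for example `NUMERO_EDITAL` has no `I-` tag and `ORGAO` has no `U-` tag). A small sketch for inspecting the scheme from the published config, again assuming the repository id used earlier:

```python
from collections import defaultdict

from transformers import AutoConfig

config = AutoConfig.from_pretrained("begnini/ner-bert-large-cased-pt-contratos_tceal")

# Group the BILOU prefixes seen for each entity type.
prefixes_by_type = defaultdict(set)
for label in config.label2id:
    if label == "O":
        continue
    prefix, entity_type = label.split("-", 1)
    prefixes_by_type[entity_type].add(prefix)

print(len(config.id2label))               # 99 classes
print(len(prefixes_by_type))              # 25 entity types
print(prefixes_by_type["NUMERO_EDITAL"])  # {'B', 'L', 'U'} -- no I- tag
```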
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab31bd735c612c29c64101160cca69d8654a37803052b24d198d17fa30ad4748
+ size 1333839980
runs/Dec15_21-35-15_7ccf83c50d65/events.out.tfevents.1702676116.7ccf83c50d65.1601.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a7c5755c1c5b4adc72a92e9a435210ff21aef8e15315f904fcd17e3dec056e8
+ size 20889
runs/Dec15_21-35-15_7ccf83c50d65/events.out.tfevents.1702677760.7ccf83c50d65.1601.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f20be7c3c1a74574b8af1101eeb9f10a3f454a106f7621b6d8262dd83f279824
+ size 560
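
The two files under `runs/` are TensorBoard event logs written during training (the larger one holds the per-epoch metrics, the smaller one the summary written at the end of the run). A sketch for reading them after cloning the repository; the scalar tag names follow the Trainer's usual `train/...` and `eval/...` convention and are assumptions here:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point the accumulator at the run directory committed above.
acc = EventAccumulator("runs/Dec15_21-35-15_7ccf83c50d65")
acc.Reload()

print(acc.Tags()["scalars"])          # available scalar tags, e.g. "eval/f1", "train/loss"
for event in acc.Scalars("eval/f1"):  # assumed tag name
    print(event.step, event.value)
```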
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,61 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 1000000000000000019884624838656,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
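
One detail worth noting in this tokenizer config: `model_max_length` is left at the library's unset sentinel value, while the model itself only has 512 position embeddings (`max_position_embeddings` in config.json), so long contract texts must be truncated or windowed explicitly. A sketch of overlapping 512-token windows, assuming the same repository id as before and a placeholder `long_text`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("begnini/ner-bert-large-cased-pt-contratos_tceal")

long_text = "..."  # placeholder: a contract publication longer than 512 tokens

# Overlapping windows keep entities near a boundary intact in at least one window;
# each window's input_ids can then be fed to the token-classification model.
encoded = tokenizer(
    long_text,
    max_length=512,
    truncation=True,
    stride=128,
    return_overflowing_tokens=True,
)
print(len(encoded["input_ids"]))  # number of windows produced
```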
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89cd26eeb2a5da9fdc85a119488bcd860b0fe4b1a341e6edfe15baf4eb108752
+ size 4728
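
training_args.bin is the pickled `TrainingArguments` object saved by the Trainer; it records the full training configuration beyond the hyperparameters listed in the model card. A minimal sketch for inspecting it locally, assuming a torch/transformers install compatible with the framework versions above:

```python
import torch

# Loading unpickles a transformers.TrainingArguments instance,
# so a compatible transformers version must be importable.
args = torch.load("training_args.bin")
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```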
vocab.txt ADDED
The diff for this file is too large to render. See raw diff