---
language: 
- pt
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: checkpoints
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: lener_br
      type: lener_br
    metrics:
    - name: F1
      type: f1
      value: 0.8716487228203504
    - name: Precision
      type: precision
      value: 0.8559286898839138
    - name: Recall
      type: recall
      value: 0.8879569892473118
    - name: Accuracy
      type: accuracy
      value: 0.9755893153732458
    - name: Loss
      type: loss
      value: 0.1133928969502449
widget:
- text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
---

## (BERT base) NER model in the legal domain in Portuguese (LeNER-Br)

**ner-bert-base-portuguese-cased-lenerbr** is a NER (token classification) model in the legal domain in Portuguese. It was finetuned on December 16, 2021, in Google Colab from the [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model on the [LeNER-Br](https://huggingface.co/datasets/lener_br) dataset using a NER objective.

Note: due to the small size of BERTimbau base and of the finetuning dataset, the model overfitted before reaching the end of training. Here are the overall final metrics on the validation dataset (*note: see the section "Validation metrics by Named Entity" for detailed metrics*):
  - **f1**: 0.8716487228203504
  - **precision**: 0.8559286898839138
  - **recall**: 0.8879569892473118
  - **accuracy**: 0.9755893153732458
  - **loss**: 0.1133928969502449
  
## Widget & APP

You can test this model directly in the widget on this page.

## Using the model for inference in production
````
# install pytorch: check https://pytorch.org/
# !pip install transformers 
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

# parameters
model_name = "ner-bert-base-portuguese-cased-lenerbr"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

input_text = "EMENTA: APELAÇÃO CÍVEL - AÇÃO DE INDENIZAÇÃO POR DANOS MORAIS - PRELIMINAR - ARGUIDA PELO MINISTÉRIO PÚBLICO EM GRAU RECURSAL - NULIDADE - AUSÊNCIA DE INTERVENÇÃO DO PARQUET NA INSTÂNCIA A QUO - PRESENÇA DE INCAPAZ - PREJUÍZO EXISTENTE - PRELIMINAR ACOLHIDA - NULIDADE RECONHECIDA."

# tokenization
inputs = tokenizer(input_text, max_length=512, truncation=True, return_tensors="pt")
tokens = inputs.tokens()

# get predictions (no gradient tracking needed at inference time)
model.eval()
with torch.no_grad():
    outputs = model(**inputs).logits
predictions = torch.argmax(outputs, dim=2)

# print one (token, label) pair per token
for token, prediction in zip(tokens, predictions[0].numpy()):
    print((token, model.config.id2label[prediction]))
````
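The loop above prints one BIO tag per token. To turn that output into entity spans, the consecutive `B-`/`I-` tags must be merged. The helper below is an illustrative sketch (not part of the model card); the tag names follow the LeNER-Br label set (e.g. `B-ORGANIZACAO`), and the `(token, tag)` pairs shown are made-up examples, not actual model output.

```python
# Group token-level BIO tags (as produced by the loop above) into entity spans.
# This is a generic sketch: the (token, tag) pairs below are illustrative.

def group_entities(tagged_tokens):
    """Merge consecutive B-/I- tags into (entity_type, text) spans."""
    entities = []
    current_type, current_tokens = None, []
    for token, tag in tagged_tokens:
        if tag.startswith("B-"):
            # a new entity starts; flush any open one
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # continuation of the current entity
            current_tokens.append(token)
        else:
            # "O" tag or an inconsistent I- tag: close the open entity
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tagged = [
    ("Ministério", "B-ORGANIZACAO"),
    ("Público", "I-ORGANIZACAO"),
    ("decidiu", "O"),
    ("em", "O"),
    ("2021", "B-TEMPO"),
]
print(group_entities(tagged))
# [('ORGANIZACAO', 'Ministério Público'), ('TEMPO', '2021')]
```

In a real run you would also merge WordPiece subtokens (those starting with `##`) back into words before joining.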
You can also use the `pipeline` API. However, it appears to have an issue with the max_length of the input sequence.
````
# !pip install transformers
from transformers import pipeline

model_name = "ner-bert-base-portuguese-cased-lenerbr"

ner = pipeline(
    "ner",
    model=model_name
)

# input_text as defined in the previous example
ner(input_text)
````
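One hypothetical workaround for the max_length limitation is to split a long token sequence into overlapping windows that each fit the 512-token limit, run the model on each window, and merge the results. The helper below only sketches the splitting step; the window and stride values are illustrative, and the merging of overlapping predictions is left out.

```python
# Sketch of a sliding-window split for sequences longer than the model's
# 512-token limit. `window` and `stride` are illustrative values.

def sliding_windows(ids, window=512, stride=256):
    """Return overlapping slices of `ids`, each at most `window` long."""
    if len(ids) <= window:
        return [ids]
    chunks = []
    start = 0
    while start < len(ids):
        chunks.append(ids[start:start + window])
        if start + window >= len(ids):
            break  # this window already covers the tail
        start += stride
    return chunks

ids = list(range(1200))       # stand-in for a long token-id sequence
chunks = sliding_windows(ids, window=512, stride=256)
print([len(c) for c in chunks])
# [512, 512, 512, 432]
```

Each consecutive pair of windows overlaps by `window - stride` tokens, which lets entities that straddle a window boundary be recovered from the neighboring window.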
## Training procedure

### Training results

````
Num examples = 7828
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 2937

Step   Training Loss   Validation Loss   Precision   Recall     F1         Accuracy
290    0.315100        0.141881          0.764542    0.709462   0.735973   0.960550
580    0.089100        0.137700          0.729155    0.810538   0.767695   0.959940
870    0.071700        0.122069          0.780277    0.872903   0.823995   0.967955
1160   0.047500        0.125950          0.800312    0.881720   0.839046   0.968367
1450   0.034900        0.129228          0.763666    0.910323   0.830570   0.969068
1740   0.036100        0.113393          0.855929    0.887957   0.871649   0.975589
2030   0.037800        0.121275          0.817230    0.889462   0.851818   0.970393
2320   0.018700        0.115745          0.836066    0.877419   0.856243   0.973136
2610   0.017100        0.118826          0.822488    0.888817   0.854367   0.973471
````
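The card does not include the actual training script. As a rough sketch, the hyperparameters printed in the log above (3 epochs, per-device batch size 8, evaluation every 290 steps) could be expressed with Hugging Face `TrainingArguments` as follows. `load_best_model_at_end` and `metric_for_best_model` are assumptions added here to address the overfitting noted earlier; every value not read directly from the log is illustrative.

```python
# Hypothetical reconstruction of the training configuration, based only on the
# hyperparameters printed in the log above; not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="checkpoints",            # matches the model-index name above
    num_train_epochs=3,                  # "Num Epochs = 3"
    per_device_train_batch_size=8,       # "Instantaneous batch size per device = 8"
    gradient_accumulation_steps=1,       # "Gradient Accumulation steps = 1"
    evaluation_strategy="steps",         # the log evaluates every 290 steps
    eval_steps=290,
    save_steps=290,
    load_best_model_at_end=True,         # assumption: keeps the pre-overfitting checkpoint
    metric_for_best_model="f1",          # assumption: select by validation F1
)
```

With a setup like this, the checkpoint at step 1740 (the best validation F1 in the table) would be the one restored at the end of training, which matches the final metrics reported above.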

### Validation metrics by Named Entity
````
Num examples = 1177

{'JURISPRUDENCIA': {'f1': 0.6641509433962263,
  'number': 657,
  'precision': 0.6586826347305389,
  'recall': 0.669710806697108},
 'LEGISLACAO': {'f1': 0.8489082969432314,
  'number': 571,
  'precision': 0.8466898954703833,
  'recall': 0.851138353765324},
 'LOCAL': {'f1': 0.8066037735849058,
  'number': 194,
  'precision': 0.7434782608695653,
  'recall': 0.8814432989690721},
 'ORGANIZACAO': {'f1': 0.8540462427745664,
  'number': 1340,
  'precision': 0.8277310924369747,
  'recall': 0.8820895522388059},
 'PESSOA': {'f1': 0.9845722300140253,
  'number': 1072,
  'precision': 0.9868791002811621,
  'recall': 0.9822761194029851},
 'TEMPO': {'f1': 0.9527794381350867,
  'number': 816,
  'precision': 0.9299883313885647,
  'recall': 0.9767156862745098},
 'overall_accuracy': 0.9755893153732458,
 'overall_f1': 0.8716487228203504,
 'overall_precision': 0.8559286898839138,
 'overall_recall': 0.8879569892473118}
````
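The overall figures are micro-averages, so they can be cross-checked against the per-entity numbers: overall recall is the support-weighted mean of the per-entity recalls, and overall F1 is the harmonic mean of overall precision and recall. The snippet below reproduces the reported aggregates from the values in the table (it is a verification sketch, not part of the evaluation code).

```python
# Cross-check of the reported aggregates; (recall, support) pairs are copied
# from the per-entity table above.
per_entity = {
    "JURISPRUDENCIA": (0.669710806697108, 657),
    "LEGISLACAO": (0.851138353765324, 571),
    "LOCAL": (0.8814432989690721, 194),
    "ORGANIZACAO": (0.8820895522388059, 1340),
    "PESSOA": (0.9822761194029851, 1072),
    "TEMPO": (0.9767156862745098, 816),
}

# Micro-averaged recall = support-weighted mean of per-entity recalls.
total = sum(n for _, n in per_entity.values())
micro_recall = sum(r * n for r, n in per_entity.values()) / total

# F1 = harmonic mean of (micro) precision and recall.
precision = 0.8559286898839138            # overall_precision from above
f1 = 2 * precision * micro_recall / (precision + micro_recall)

print(round(micro_recall, 6))  # 0.887957 (matches overall_recall)
print(round(f1, 6))            # 0.871649 (matches overall_f1)
```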