jeanpoll committed
Commit • 12cca13 • 1 Parent(s): d1834bb

initial commit

Browse files:
- README.md +120 -0
- config.json +40 -0
- merges.txt +0 -0
- pytorch_model.bin +3 -0
- results.csv +6 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,120 @@
---
language: en
datasets:
- conll2003
widget:
- text: "My name is jean-baptiste and I live in montreal"
- text: "My name is clara and I live in berkeley, california."
- text: "My name is wolfgang and I live in berlin"

---

# roberta-large-ner: model fine-tuned from roberta-large for NER task

## Introduction

roberta-large-ner is a NER model fine-tuned from roberta-large on the conll2003 dataset.
The model was validated on email/chat data and outperformed other models on this type of data specifically.
In particular, it seems to work better on entities that don't start with an upper-case letter.


## Training data

Training data was classified as follows:

Abbreviation | Description
- | -
O | Outside of a named entity
MISC | Miscellaneous entity
PER | Person's name
ORG | Organization
LOC | Location

To simplify the label set, the B- and I- prefixes from the original conll2003 tags were removed (a sketch of this preprocessing is shown below).
I used the train and test splits of the original conll2003 for training and its "validation" split for validation. This resulted in datasets of the following sizes:

Split | Size
- | -
Train | 17494
Validation | 3250

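The exact preprocessing script is not part of this commit; the snippet below is only a minimal sketch of the prefix removal described above, assuming the Hugging Face `datasets` copy of conll2003 (the label-to-id assignment actually used by this model is the one recorded in config.json).

```python
# Sketch only: simplify conll2003 NER tags by dropping the B-/I- prefixes.
from datasets import load_dataset, concatenate_datasets

raw = load_dataset("conll2003")
tag_names = raw["train"].features["ner_tags"].feature.names  # "O", "B-PER", "I-PER", ...

# Collapse "B-XXX"/"I-XXX" into plain "XXX"; "O" stays "O".
simplified = sorted({name.split("-")[-1] for name in tag_names})
label2id = {label: i for i, label in enumerate(simplified)}  # illustrative ids; see config.json

def strip_prefixes(example):
    example["labels"] = [label2id[tag_names[t].split("-")[-1]] for t in example["ner_tags"]]
    return example

# Train on train+test and keep the official validation split for evaluation,
# as described above.
train_ds = concatenate_datasets([raw["train"], raw["test"]]).map(strip_prefixes)
val_ds = raw["validation"].map(strip_prefixes)
```
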
## How to use roberta-large-ner with HuggingFace

##### Load roberta-large-ner and its sub-word tokenizer:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner")


##### Process text sample (from Wikipedia)

from transformers import pipeline

nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")


[{'entity_group': 'ORG',
  'score': 0.99381506,
  'word': ' Apple',
  'start': 0,
  'end': 5},
 {'entity_group': 'PER',
  'score': 0.99970853,
  'word': ' Steve Jobs',
  'start': 29,
  'end': 39},
 {'entity_group': 'PER',
  'score': 0.99981767,
  'word': ' Steve Wozniak',
  'start': 41,
  'end': 54},
 {'entity_group': 'PER',
  'score': 0.99956465,
  'word': ' Ronald Wayne',
  'start': 59,
  'end': 71},
 {'entity_group': 'PER',
  'score': 0.9997918,
  'word': ' Wozniak',
  'start': 92,
  'end': 99},
 {'entity_group': 'MISC',
  'score': 0.99956393,
  'word': ' Apple I',
  'start': 102,
  'end': 109}]
```
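Not in the original card: a small usage example of reducing the grouped pipeline output to plain (text, label) pairs. Note that `word` values keep the tokenizer's leading space, so they are stripped here.

```python
# Assumes the `nlp` pipeline defined above.
entities = nlp("My name is wolfgang and I live in berlin")
pairs = [(ent["word"].strip(), ent["entity_group"]) for ent in entities]
print(pairs)  # expected something like [('wolfgang', 'PER'), ('berlin', 'LOC')]
```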


## Model performances

Model performance computed on the conll2003 validation dataset (token-level predictions):

Entity | Precision | Recall | F1
- | - | - | -
PER | 0.9914 | 0.9927 | 0.9920
ORG | 0.9627 | 0.9661 | 0.9644
LOC | 0.9795 | 0.9862 | 0.9828
MISC | 0.9292 | 0.9262 | 0.9277
Overall | 0.9740 | 0.9766 | 0.9753
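The evaluation script itself is not included in this commit; as a rough illustration, token-level per-entity precision/recall/f1 like the numbers above can be computed with scikit-learn, given gold and predicted tag names aligned one per token (the data below is a hypothetical toy example).

```python
# Sketch only: per-entity token-level metrics from aligned gold/predicted tags.
from sklearn.metrics import classification_report

gold = ["O", "PER", "PER", "O", "ORG", "LOC"]  # gold tag for each token
pred = ["O", "PER", "PER", "O", "ORG", "O"]    # predicted tag for each token

print(classification_report(gold, pred, labels=["PER", "ORG", "LOC", "MISC"], zero_division=0))
```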

On a private dataset (email, chat, informal discussion), computed on word-level predictions:

Entity | Precision | Recall | F1
- | - | - | -
PER | 0.8823 | 0.9116 | 0.8967
ORG | 0.7694 | 0.7292 | 0.7487
LOC | 0.8619 | 0.7768 | 0.8171
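
The private dataset is not distributed with the model; as a hypothetical illustration of what "word-level predictions" means here, one way to derive one label per whitespace-separated word from the grouped pipeline output is to use the character offsets it returns.

```python
# Sketch only: map grouped entity spans (character offsets) to one label per word.
import re

def word_level_labels(text, entities):
    labels = []
    for match in re.finditer(r"\S+", text):
        label = "O"
        for ent in entities:
            # A word gets the entity label if its span overlaps the entity span.
            if match.start() < ent["end"] and ent["start"] < match.end():
                label = ent["entity_group"]
                break
        labels.append((match.group(), label))
    return labels

text = "My name is clara and I live in berkeley, california."
print(word_level_labels(text, nlp(text)))  # uses the `nlp` pipeline defined above
```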

Spacy (en_core_web_trf-3.2.0) on the same private dataset gave:

Entity | Precision | Recall | F1
- | - | - | -
PER | 0.9146 | 0.8287 | 0.8695
ORG | 0.7655 | 0.6437 | 0.6993
LOC | 0.8727 | 0.6180 | 0.7236

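The spaCy comparison script is not part of this commit either; a minimal sketch of such a baseline, assuming `en_core_web_trf` is installed, is to run the spaCy pipeline and map its label set (PERSON, ORG, GPE, LOC, ...) onto the PER/ORG/LOC scheme used above.

```python
# Sketch only: a spaCy transformer-NER baseline mapped to PER/ORG/LOC.
import spacy

nlp_spacy = spacy.load("en_core_web_trf")
label_map = {"PERSON": "PER", "ORG": "ORG", "GPE": "LOC", "LOC": "LOC"}

doc = nlp_spacy("My name is wolfgang and I live in berlin")
print([(ent.text, label_map[ent.label_]) for ent in doc.ents if ent.label_ in label_map])
```
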
config.json
ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "roberta-large",
  "architectures": [
    "RobertaForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "id2label": {
    "0": "O",
    "1": "LOC",
    "2": "PER",
    "3": "MISC",
    "4": "ORG"
  },
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "label2id": {
    "LOC": 1,
    "MISC": 3,
    "O": 0,
    "ORG": 4,
    "PER": 2
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.3.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e77a9cef4873df5643217b672929b3f8d3113b4a177bf593096d7b9db7e03f4
size 1417433007
results.csv
ADDED
@@ -0,0 +1,6 @@
,precision,recall,f1,entity
0,0.9795249795249795,0.9862561847168774,0.9828790576633339,LOC
1,0.9914318668643928,0.9927404718693285,0.9920857378400659,PER
2,0.9292274446245273,0.9262250942380184,0.9277238403451995,MISC
3,0.9627007895453308,0.966120218579235,0.9644074730669576,ORG
4,0.9740825890497252,0.9766692954784437,0.9753719894698967,Overall
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "add_prefix_space": true, "errors": "replace", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": "<mask>", "model_max_length": 512, "name_or_path": "roberta-large"}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff