DunnBC22 committed
Commit baa1085 · Parent: ccebf00

Update README.md

Files changed (1)
  1. README.md +64 -25

README.md CHANGED
@@ -5,40 +5,80 @@ tags:
  model-index:
  - name: bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd

- This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.1312
- - Erson: {'precision': 0.8860048426150121, 'recall': 0.9401849948612538, 'f1': 0.912291199202194, 'number': 29190}
- - Ocation: {'precision': 0.8686381704207632, 'recall': 0.8152889539136796, 'f1': 0.841118472477534, 'number': 95690}
- - Rganization: {'precision': 0.7919078915181266, 'recall': 0.7449641777764141, 'f1': 0.7677190874452579, 'number': 65183}
- - Roduct: {'precision': 0.7065968977761166, 'recall': 0.8295304958315051, 'f1': 0.7631446160056513, 'number': 9116}
- - Rt: {'precision': 0.8407258064516129, 'recall': 0.8614333386302241, 'f1': 0.8509536143159878, 'number': 6293}
- - Ther: {'precision': 0.7303024586555996, 'recall': 0.8314124132006586, 'f1': 0.7775843599357258, 'number': 13969}
- - Uilding: {'precision': 0.5162234691388143, 'recall': 0.3648904983617865, 'f1': 0.4275611234592847, 'number': 5799}
- - Vent: {'precision': 0.605920892987139, 'recall': 0.35144264602392683, 'f1': 0.44486014608943525, 'number': 7105}
- - Overall Precision: 0.8203
- - Overall Recall: 0.7886
- - Overall F1: 0.8041
- - Overall Accuracy: 0.9498

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

@@ -57,15 +97,14 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Erson | Ocation | Rganization | Roduct | Rt | Ther | Uilding | Vent | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
- |:-------------:|:-----:|:-----:|:---------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----------------:|:--------------:|:----------:|:----------------:|
- | 0.1796 | 1.0 | 11293 | 0.1427 | {'precision': 0.8740795762821341, 'recall': 0.9272010962658445, 'f1': 0.8998570336137248, 'number': 29190} | {'precision': 0.8576076215009827, 'recall': 0.8071585327620441, 'f1': 0.8316186723086282, 'number': 95690} | {'precision': 0.7699032109387003, 'recall': 0.7688047497046776, 'f1': 0.7693535882339395, 'number': 65183} | {'precision': 0.6710836277974087, 'recall': 0.75, 'f1': 0.7083506009117282, 'number': 9116} | {'precision': 0.834716121685375, 'recall': 0.8153503893214683, 'f1': 0.8249196141479099, 'number': 6293} | {'precision': 0.6742843680056544, 'recall': 0.8195289569761615, 'f1': 0.7398455423789058, 'number': 13969} | {'precision': 0.4812014282713716, 'recall': 0.3950681151922745, 'f1': 0.4339015151515152, 'number': 5799} | {'precision': 0.5997923695821438, 'recall': 0.32526389866291344, 'f1': 0.4217922978645739, 'number': 7105} | 0.8000 | 0.7852 | 0.7925 | 0.9483 |
- | 0.1542 | 2.0 | 22586 | 0.1312 | {'precision': 0.8860048426150121, 'recall': 0.9401849948612538, 'f1': 0.912291199202194, 'number': 29190} | {'precision': 0.8686381704207632, 'recall': 0.8152889539136796, 'f1': 0.841118472477534, 'number': 95690} | {'precision': 0.7919078915181266, 'recall': 0.7449641777764141, 'f1': 0.7677190874452579, 'number': 65183} | {'precision': 0.7065968977761166, 'recall': 0.8295304958315051, 'f1': 0.7631446160056513, 'number': 9116} | {'precision': 0.8407258064516129, 'recall': 0.8614333386302241, 'f1': 0.8509536143159878, 'number': 6293} | {'precision': 0.7303024586555996, 'recall': 0.8314124132006586, 'f1': 0.7775843599357258, 'number': 13969} | {'precision': 0.5162234691388143, 'recall': 0.3648904983617865, 'f1': 0.4275611234592847, 'number': 5799} | {'precision': 0.605920892987139, 'recall': 0.35144264602392683, 'f1': 0.44486014608943525, 'number': 7105} | 0.8203 | 0.7886 | 0.8041 | 0.9498 |
-

  ### Framework versions

  - Transformers 4.30.2
  - Pytorch 2.0.1+cu118
  - Datasets 2.13.1
- - Tokenizers 0.13.3
 
  model-index:
  - name: bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd
  results: []
+ language:
+ - en
+ metrics:
+ - seqeval
+ - f1
+ - accuracy
+ - recall
+ - precision
+ pipeline_tag: token-classification
  ---

  # bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd

+ This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [DFKI-SLT/few-nerd](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset.
+
  It achieves the following results on the evaluation set:
  - Loss: 0.1312
+ - Person
+   - Precision: 0.8860048426150121
+   - Recall: 0.9401849948612538
+   - F1: 0.912291199202194
+   - Number: 29190
+ - Location
+   - Precision: 0.8686381704207632
+   - Recall: 0.8152889539136796
+   - F1: 0.841118472477534
+   - Number: 95690
+ - Organization
+   - Precision: 0.7919078915181266
+   - Recall: 0.7449641777764141
+   - F1: 0.7677190874452579
+   - Number: 65183
+ - Product
+   - Precision: 0.7065968977761166
+   - Recall: 0.8295304958315051
+   - F1: 0.7631446160056513
+   - Number: 9116
+ - Art
+   - Precision: 0.8407258064516129
+   - Recall: 0.8614333386302241
+   - F1: 0.8509536143159878
+   - Number: 6293
+ - Other
+   - Precision: 0.7303024586555996
+   - Recall: 0.8314124132006586
+   - F1: 0.7775843599357258
+   - Number: 13969
+ - Building
+   - Precision: 0.5162234691388143
+   - Recall: 0.3648904983617865
+   - F1: 0.4275611234592847
+   - Number: 5799
+ - Event
+   - Precision: 0.605920892987139
+   - Recall: 0.35144264602392683
+   - F1: 0.44486014608943525
+   - Number: 7105
+ - Overall
+   - Precision: 0.8203
+   - Recall: 0.7886
+   - F1: 0.8041
+   - Accuracy: 0.9498
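
The per-entity entries and overall figures above follow the output format of the seqeval metric listed in the card metadata. Below is a minimal, illustrative sketch of how numbers in this format are produced with the `evaluate` library; the toy tags and sentences are placeholders, not this model's actual predictions.

```python
import evaluate

# seqeval reports one dict per entity type plus overall precision/recall/F1/accuracy,
# matching the structure of the results listed above.
seqeval = evaluate.load("seqeval")

# Toy predictions/references; a real evaluation would use the model's outputs on
# the Few-NERD validation split with that dataset's own label strings.
predictions = [["O", "B-person", "I-person", "O", "B-location"]]
references = [["O", "B-person", "I-person", "O", "B-location"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["person"])      # {'precision': ..., 'recall': ..., 'f1': ..., 'number': ...}
print(results["overall_f1"])  # single overall F1 score
```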
 
  ## Model description

+ For more information on how it was created, check out the following link:

  ## Intended uses & limitations

+ This model is intended to demonstrate my ability to solve a complex problem using technology.
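
As a token-classification model it can be used directly for named-entity recognition. The sketch below is a minimal usage example and assumes the model is hosted under the repo id `DunnBC22/bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd`; adjust the id if the actual path differs.

```python
from transformers import pipeline

# Repo id inferred from the card title (assumption); replace with the real path if needed.
ner = pipeline(
    "token-classification",
    model="DunnBC22/bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("The Louvre in Paris was visited by Barack Obama."))
```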

  ## Training and evaluation data

+ Dataset Source: https://huggingface.co/datasets/DFKI-SLT/few-nerd
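
A minimal sketch for loading the data with the `datasets` library is shown below; the `"supervised"` configuration name is an assumption, since the card does not state which Few-NERD configuration was used for fine-tuning.

```python
from datasets import load_dataset

# Few-NERD ships several configurations (e.g. "supervised", "inter", "intra");
# "supervised" is assumed here.
dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")

print(dataset)                          # train / validation / test splits
print(dataset["train"][0]["tokens"])    # tokenised sentence
print(dataset["train"][0]["ner_tags"])  # coarse entity labels as class ids
```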

  ## Training procedure

 

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Person Precision | Person Recall | Person F1 | Person Number | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Product Precision | Product Recall | Product F1 | Product Number | Art Precision | Art Recall | Art F1 | Art Number | Other Precision | Other Recall | Other F1 | Other Number | Building Precision | Building Recall | Building F1 | Building Number | Event Precision | Event Recall | Event F1 | Event Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
+ | 0.1796 | 1.0 | 11293 | 0.1427 | 0.8741 | 0.9272 | 0.8999 | 29190 | 0.8576 | 0.8072 | 0.8316 | 95690 | 0.7699 | 0.7688 | 0.7694 | 65183 | 0.6711 | 0.75 | 0.7084 | 9116 | 0.8347 | 0.8154 | 0.8249 | 6293 | 0.6743 | 0.8195 | 0.7398 | 13969 | 0.4812 | 0.3951 | 0.4339 | 5799 | 0.5998 | 0.3253 | 0.4218 | 7105 | 0.8000 | 0.7852 | 0.7925 | 0.9483 |
+ | 0.1542 | 2.0 | 22586 | 0.1312 | 0.8860 | 0.9402 | 0.9123 | 29190 | 0.8686 | 0.8153 | 0.8411 | 95690 | 0.7919 | 0.7450 | 0.7677 | 65183 | 0.7066 | 0.8295 | 0.7631 | 9116 | 0.8407 | 0.8614 | 0.8510 | 6293 | 0.7303 | 0.8314 | 0.7776 | 13969 | 0.5162 | 0.3649 | 0.4276 | 5799 | 0.6059 | 0.3514 | 0.4449 | 7105 | 0.8203 | 0.7886 | 0.8041 | 0.9498 |

  ### Framework versions

  - Transformers 4.30.2
  - Pytorch 2.0.1+cu118
  - Datasets 2.13.1
+ - Tokenizers 0.13.3