Commit 222e4b3 by amichailidis
1 Parent(s): 2d0cb9f

Update README.md

Files changed (1):
  1. README.md +78 -86

README.md CHANGED
@@ -1,86 +1,78 @@
- {\rtf1\ansi\ansicpg1252\cocoartf2639
- \cocoatextscaling0\cocoaplatform0{\fonttbl\f0\fswiss\fcharset0 Helvetica;}
- {\colortbl;\red255\green255\blue255;}
- {\*\expandedcolortbl;;}
- \paperw11900\paperh16840\margl1440\margr1440\vieww20940\viewh15640\viewkind0
- \pard\tx566\tx1133\tx1700\tx2267\tx2834\tx3401\tx3968\tx4535\tx5102\tx5669\tx6236\tx6803\pardirnatural\partightenfactor0
-
- \f0\fs24 \cf0 ---\
- tags:\
- - generated_from_trainer\
- metrics:\
- - precision\
- - recall\
- - f1\
- - accuracy\
- model-index:\
- - name: bert-base-greek-uncased-v1-finetuned-ner\
- results: []\
- ---\
- \
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You\
- should probably proofread and complete it, then remove this comment. -->\
- \
- #bert-base-greek-uncased-v1-finetuned-ner\
- \
- This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1]({\field{\*\fldinst{HYPERLINK "https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1"}}{\fldrslt https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1}}) on an unknown dataset.\
- It achieves the following results on the evaluation set:\
- - Loss: 0.1052\
- - Precision: 0.8440\
- - Recall: 0.8566\
- - F1: 0.8503\
- - Accuracy: 0.9768\
- \
- ## Model description\
- \
- More information needed\
- \
- ## Intended uses & limitations\
- \
- More information needed\
- \
- ## Training and evaluation data\
- \
- More information needed\
- \
- ## Training procedure\
- \
- ### Training hyperparameters\
- \
- The following hyperparameters were used during training:\
- - learning_rate: 2e-05\
- - train_batch_size: 16\
- - eval_batch_size: 16\
- - seed: 42\
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\
- - lr_scheduler_type: linear\
- - num_epochs: 10\
- \
- ### Training results\
- \
- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\
- | No log | 0.68 | 250 | 0.0913 | 0.7814 | 0.8596 | 0.8187 | 0.9717 |\
- | 0.1136 | 1.29 | 500 | 0.0823 | 0.7940 | 0.8738 | 0.8320 | 0.9731 |\
- | 0.1136 | 1.93 | 750 | 0.0810 | 0.8057 | 0.8645 | 0.8341 | 0.9737 |\
- | 0.0521 | 2.58 | 1000 | 0.0855 | 0.8244 | 0.8610 | 0.8423 | 0.9752 |\
- | 0.0521 | 3.22 | 1250 | 0.0926 | 0.8329 | 0.8627 | 0.8476 | 0.9762 |\
- | 0.0352 | 3.87 | 1500 | 0.0869 | 0.8286 | 0.8614 | 0.8447 | 0.9755 |\
- | 0.0352 | 4.51 | 1750 | 0.0950 | 0.8446 | 0.8528 | 0.8487 | 0.9751 |\
- | 0.023 | 5.15 | 2000 | 0.1052 | 0.8381 | 0.8586 | 0.8483 | 0.9759 |\
- | 0.023 | 5.8 | 2250 | 0.1049 | 0.8291 | 0.8614 | 0.8449 | 0.9758 |\
- | 0.0158 | 6.44 | 2500 | 0.1158 | 0.8189 | 0.8727 | 0.8450 | 0.9758 |\
- | 0.0158 | 7.09 | 2750 | 0.1248 | 0.8270 | 0.8648 | 0.8455 | 0.9757 |\
- | 0.0126 | 7.73 | 3000 | 0.1287 | 0.8363 | 0.8610 | 0.8485 | 0.9758 |\
- | 0.0126 | 8.38 | 3250 | 0.1325 | 0.8247 | 0.8707 | 0.8471 | 0.9753 |\
- | 0.0089 | 9.02 | 3500 | 0.1342 | 0.8316 | 0.8559 | 0.8435 | 0.9757 |\
- | 0.0089 | 9.66 | 3750 | 0.1355 | 0.8293 | 0.8638 | 0.8462 | 0.9759 |\
- \
- \
- ### Framework versions\
- \
- - Transformers 4.22.0\
- - Pytorch 1.12.1+cu113\
- - Datasets 2.4.0\
- - Tokenizers 0.12.1\
- }
 
+ ---
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: bert-base-greek-uncased-v1-finetuned-ner
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-base-greek-uncased-v1-finetuned-ner
+
+ This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1052
+ - Precision: 0.8440
+ - Recall: 0.8566
+ - F1: 0.8503
+ - Accuracy: 0.9768
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log | 0.64 | 250 | 0.0913 | 0.7814 | 0.8596 | 0.8187 | 0.9717 |
+ | 0.1136 | 1.29 | 500 | 0.0823 | 0.7940 | 0.8738 | 0.8320 | 0.9731 |
+ | 0.1136 | 1.93 | 750 | 0.0812 | 0.8057 | 0.8645 | 0.8341 | 0.9737 |
+ | 0.0521 | 2.58 | 1000 | 0.0855 | 0.8244 | 0.8610 | 0.8423 | 0.9752 |
+ | 0.0521 | 3.22 | 1250 | 0.0926 | 0.8329 | 0.8627 | 0.8476 | 0.9762 |
+ | 0.0352 | 3.87 | 1500 | 0.0869 | 0.8256 | 0.8633 | 0.8440 | 0.9774 |
+ | 0.0352 | 4.51 | 1750 | 0.1049 | 0.8290 | 0.8528 | 0.8487 | 0.9751 |
+ | 0.023 | 5.15 | 2000 | 0.1093 | 0.8440 | 0.8528 | 0.8487 | 0.9751 |
+ | 0.023 | 5.8 | 2250 | 0.1172 | 0.8301 | 0.8586 | 0.8483 | 0.9759 |
+ | 0.0158 | 6.44 | 2500 | 0.1273 | 0.8238 | 0.8614 | 0.8449 | 0.9758 |
+ | 0.0158 | 7.09 | 2750 | 0.1246 | 0.8350 | 0.8727 | 0.8450 | 0.9758 |
+ | 0.0126 | 7.73 | 3000 | 0.1262 | 0.8333 | 0.8648 | 0.8455 | 0.9757 |
+ | 0.0126 | 8.38 | 3250 | 0.1347 | 0.8319 | 0.8610 | 0.8485 | 0.9758 |
+ | 0.0089 | 9.02 | 3500 | 0.1325 | 0.8376 | 0.8707 | 0.8471 | 0.9753 |
+ | 0.0089 | 9.66 | 3750 | 0.1362 | 0.8371 | 0.8559 | 0.8435 | 0.9757 |
+
+
+ ### Framework versions
+
+ - Transformers 4.22.0
+ - Pytorch 1.12.1+cu113
+ - Datasets 2.4.0
+ - Tokenizers 0.12.1
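The card describes a token-classification (NER) fine-tune but never shows how its predictions would be consumed. The sketch below is a hedged illustration only: the Hub repo id `amichailidis/bert-base-greek-uncased-v1-finetuned-ner` and the BIO label names (`B-LOC`, `I-LOC`, `B-PER`, ...) are assumptions, since the card states neither. The `bio_to_spans` helper just demonstrates the standard way token-level BIO tags are merged into entity spans.

```python
from typing import List, Optional, Tuple


def bio_to_spans(tokens: List[str], tags: List[str]) -> List[Tuple[str, str]]:
    """Merge token-level BIO tags (e.g. B-LOC, I-LOC, O) into
    (entity_text, entity_type) spans. Label names are illustrative;
    the model card does not list the actual tag set."""
    spans: List[Tuple[str, str]] = []
    current: List[str] = []
    current_type: Optional[str] = None

    def flush() -> None:
        # Close the currently open entity span, if any.
        if current:
            spans.append((" ".join(current), current_type))

    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            flush()  # a B- tag always starts a fresh span
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current.append(token)  # continuation of the open span
        else:  # "O", or an I- tag that does not continue the open span
            flush()
            current, current_type = [], None
    flush()
    return spans


# A worked example with Greek tokens (whitespace-split for simplicity;
# the real model would use its own WordPiece tokenizer):
tokens = ["Η", "Αθήνα", "είναι", "πρωτεύουσα", "της", "Ελλάδας"]
tags = ["O", "B-LOC", "O", "O", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))  # [('Αθήνα', 'LOC'), ('Ελλάδας', 'LOC')]

# Loading the checkpoint itself would look roughly like the snippet below.
# The repo id is an assumption (not stated in the card), so it is left
# commented out rather than presented as the canonical way to load it:
#
#   from transformers import pipeline
#   ner = pipeline(
#       "token-classification",
#       model="amichailidis/bert-base-greek-uncased-v1-finetuned-ner",
#       aggregation_strategy="simple",  # groups B-/I- pieces into entities
#   )
#   ner("Η Αθήνα είναι η πρωτεύουσα της Ελλάδας.")
```

Note that the `transformers` pipeline with `aggregation_strategy="simple"` performs this B-/I- grouping internally; the helper above only makes the mechanism explicit for readers working with raw per-token logits.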