EvgeniaKomleva committed
Commit 4776d3b · 1 Parent(s): a236d51

Model save

Files changed (1):
  1. README.md (+46 -12)
README.md CHANGED

@@ -5,9 +5,36 @@ tags:
 - generated_from_trainer
 datasets:
 - plod-filtered
+metrics:
+- precision
+- recall
+- f1
+- accuracy
 model-index:
 - name: roberta-large-finetuned-abbr-finetuned-ner
-  results: []
+  results:
+  - task:
+      name: Token Classification
+      type: token-classification
+    dataset:
+      name: plod-filtered
+      type: plod-filtered
+      config: PLODfiltered
+      split: validation
+      args: PLODfiltered
+    metrics:
+    - name: Precision
+      type: precision
+      value: 0.9800350338833268
+    - name: Recall
+      type: recall
+      value: 0.9766733969309696
+    - name: F1
+      type: f1
+      value: 0.9783513277508114
+    - name: Accuracy
+      type: accuracy
+      value: 0.9761728475392376
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,16 +44,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [surrey-nlp/roberta-large-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) on the plod-filtered dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.0988
-- eval_precision: 0.9704
-- eval_recall: 0.9689
-- eval_f1: 0.9697
-- eval_accuracy: 0.9665
-- eval_runtime: 204.5482
-- eval_samples_per_second: 118.016
-- eval_steps_per_second: 29.504
-- epoch: 2.72
-- step: 76484
+- Loss: 0.0913
+- Precision: 0.9800
+- Recall: 0.9767
+- F1: 0.9784
+- Accuracy: 0.9762
 
 ## Model description
 
@@ -46,13 +68,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 4
+- train_batch_size: 16
 - eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 6
 
+### Training results
+
+| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| 0.0805        | 0.99  | 7000  | 0.0761          | 0.9762    | 0.9722 | 0.9742 | 0.9720   |
+| 0.0655        | 1.99  | 14000 | 0.0682          | 0.9769    | 0.9748 | 0.9759 | 0.9735   |
+| 0.0469        | 2.98  | 21000 | 0.0718          | 0.9787    | 0.9746 | 0.9767 | 0.9744   |
+| 0.0336        | 3.98  | 28000 | 0.0851          | 0.9800    | 0.9753 | 0.9776 | 0.9753   |
+| 0.0259        | 4.97  | 35000 | 0.0913          | 0.9800    | 0.9767 | 0.9784 | 0.9762   |
+| 0.0197        | 5.97  | 42000 | 0.0948          | 0.9801    | 0.9774 | 0.9787 | 0.9766   |
+
+
 ### Framework versions
 
 - Transformers 4.35.2
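As a sanity check on the metrics this commit adds: micro-averaged F1 is the harmonic mean of precision and recall, so the `value` fields in the new model-index block can be verified against one another. A minimal sketch in pure Python, using the precision and recall values from the diff above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (micro F1)."""
    return 2 * precision * recall / (precision + recall)

# Values from the updated model-index block
precision = 0.9800350338833268
recall = 0.9766733969309696

f1 = f1_score(precision, recall)
# Agrees with the card's reported F1 (0.97835...) to rounding
print(round(f1, 4))
```

This confirms the three reported metrics are internally consistent rather than independently transcribed.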
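The hyperparameters in the diff name a `linear` scheduler with 6 epochs, and the training-results table runs to 42000 steps, which implies a learning rate decaying linearly from 2e-05 toward zero. A minimal sketch of that schedule in pure Python; the zero-warmup default and the 42000-step total are assumptions read off the table, not stated in the card:

```python
def linear_schedule_lr(step: int, total_steps: int,
                       base_lr: float = 2e-05,
                       warmup_steps: int = 0) -> float:
    """Linear warmup (if any) followed by linear decay to zero,
    in the style of a Hugging Face linear schedule with warmup."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# At step 21000 (roughly epoch 3 of 6), the rate has halved:
print(linear_schedule_lr(21000, 42000))  # 1e-05
```

Under this schedule the best checkpoint in the table (step 35000) was trained at roughly a sixth of the initial learning rate.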