pabRomero committed
Commit: de46d23
1 Parent(s): 56771f5

Training complete

Files changed (1)
  1. README.md +26 -18
README.md CHANGED

@@ -1,6 +1,7 @@
 ---
 library_name: transformers
-base_model: allenai/biomed_roberta_base
+license: mit
+base_model: emilyalsentzer/Bio_ClinicalBERT
 tags:
 - generated_from_trainer
 metrics:
@@ -18,13 +19,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # BioMedRoBERTa-finetuned-ner-pablo-just-classifier
 
-This model is a fine-tuned version of [allenai/biomed_roberta_base](https://huggingface.co/allenai/biomed_roberta_base) on the None dataset.
+This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1276
-- Precision: 0.6818
-- Recall: 0.7031
-- F1: 0.6923
-- Accuracy: 0.9672
+- Loss: 0.1150
+- Precision: 0.6869
+- Recall: 0.7076
+- F1: 0.6971
+- Accuracy: 0.9677
 
 ## Model description
 
@@ -44,24 +45,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.1
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 128
+- eval_batch_size: 128
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 512
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
+- lr_scheduler_type: cosine_with_restarts
 - lr_scheduler_warmup_ratio: 0.05
-- num_epochs: 5
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.7191        | 1.0   | 2509  | 0.5248          | 0.5127    | 0.6334 | 0.5667 | 0.9486   |
-| 0.5382        | 2.0   | 5018  | 0.4280          | 0.5378    | 0.6500 | 0.5886 | 0.9556   |
-| 0.3968        | 3.0   | 7527  | 0.3095          | 0.4997    | 0.6714 | 0.5730 | 0.9531   |
-| 0.2528        | 4.0   | 10036 | 0.1872          | 0.5631    | 0.6850 | 0.6181 | 0.9599   |
-| 0.1541        | 5.0   | 12545 | 0.1276          | 0.6818    | 0.7031 | 0.6923 | 0.9672   |
+| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| No log        | 0.9655 | 14   | 0.3729          | 0.4205    | 0.6119 | 0.4985 | 0.9430   |
+| No log        | 2.0    | 29   | 0.2544          | 0.5272    | 0.6683 | 0.5894 | 0.9574   |
+| No log        | 2.9655 | 43   | 0.2117          | 0.5702    | 0.6884 | 0.6238 | 0.9604   |
+| No log        | 4.0    | 58   | 0.1747          | 0.5934    | 0.7001 | 0.6424 | 0.9628   |
+| No log        | 4.9655 | 72   | 0.1420          | 0.6280    | 0.6827 | 0.6542 | 0.9642   |
+| No log        | 6.0    | 87   | 0.1287          | 0.6639    | 0.7033 | 0.6830 | 0.9667   |
+| No log        | 6.9655 | 101  | 0.1309          | 0.6471    | 0.7009 | 0.6729 | 0.9654   |
+| No log        | 8.0    | 116  | 0.1260          | 0.6349    | 0.7199 | 0.6748 | 0.9652   |
+| No log        | 8.9655 | 130  | 0.1159          | 0.6621    | 0.7118 | 0.6860 | 0.9670   |
+| No log        | 9.6552 | 140  | 0.1150          | 0.6869    | 0.7076 | 0.6971 | 0.9677   |
 
 
 ### Framework versions
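The hyperparameters in the new revision map directly onto standard `transformers.TrainingArguments` fields. A minimal sketch of that configuration, assuming the `Trainer` API was used (the `output_dir` value is an assumption taken from the model name, and this is a reconstruction, not the author's actual training script):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed in the card.
args = TrainingArguments(
    output_dir="BioMedRoBERTa-finetuned-ner-pablo-just-classifier",  # assumed
    learning_rate=0.1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,   # effective (total) train batch size: 128 * 4 = 512
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.05,
    num_train_epochs=10,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```

Note that `total_train_batch_size: 512` in the card is derived, not set directly: it is `train_batch_size × gradient_accumulation_steps` (× the number of devices, if training on more than one GPU).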