pabRomero committed
Commit: 1e84bf0
1 Parent(s): 6d7ca82

Training complete

Files changed (1):
  1. README.md (+19 -19)
README.md CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1150
- - Precision: 0.6869
- - Recall: 0.7076
- - F1: 0.6971
- - Accuracy: 0.9677
+ - Loss: 0.1228
+ - Precision: 0.6701
+ - Recall: 0.6809
+ - F1: 0.6754
+ - Accuracy: 0.9657
 
  ## Model description
 
@@ -44,12 +44,12 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.1
- - train_batch_size: 128
- - eval_batch_size: 128
+ - learning_rate: 0.01
+ - train_batch_size: 512
+ - eval_batch_size: 512
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 512
+ - total_train_batch_size: 2048
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine_with_restarts
  - lr_scheduler_warmup_ratio: 0.05
@@ -60,16 +60,16 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | No log | 0.9655 | 14 | 0.3729 | 0.4205 | 0.6119 | 0.4985 | 0.9430 |
- | No log | 2.0 | 29 | 0.2544 | 0.5272 | 0.6683 | 0.5894 | 0.9574 |
- | No log | 2.9655 | 43 | 0.2117 | 0.5702 | 0.6884 | 0.6238 | 0.9604 |
- | No log | 4.0 | 58 | 0.1747 | 0.5934 | 0.7001 | 0.6424 | 0.9628 |
- | No log | 4.9655 | 72 | 0.1420 | 0.6280 | 0.6827 | 0.6542 | 0.9642 |
- | No log | 6.0 | 87 | 0.1287 | 0.6639 | 0.7033 | 0.6830 | 0.9667 |
- | No log | 6.9655 | 101 | 0.1309 | 0.6471 | 0.7009 | 0.6729 | 0.9654 |
- | No log | 8.0 | 116 | 0.1260 | 0.6349 | 0.7199 | 0.6748 | 0.9652 |
- | No log | 8.9655 | 130 | 0.1159 | 0.6621 | 0.7118 | 0.6860 | 0.9670 |
- | No log | 9.6552 | 140 | 0.1150 | 0.6869 | 0.7076 | 0.6971 | 0.9677 |
+ | No log | 0.9697 | 16 | 0.2938 | 0.4425 | 0.5130 | 0.4751 | 0.9361 |
+ | No log | 2.0 | 33 | 0.1815 | 0.5546 | 0.5873 | 0.5705 | 0.9535 |
+ | No log | 2.9697 | 49 | 0.1617 | 0.5838 | 0.6189 | 0.6008 | 0.9575 |
+ | No log | 4.0 | 66 | 0.1482 | 0.6070 | 0.6396 | 0.6229 | 0.9602 |
+ | No log | 4.9697 | 82 | 0.1340 | 0.6465 | 0.6563 | 0.6513 | 0.9633 |
+ | No log | 6.0 | 99 | 0.1306 | 0.6561 | 0.6638 | 0.6599 | 0.9641 |
+ | No log | 6.9697 | 115 | 0.1290 | 0.6569 | 0.6705 | 0.6636 | 0.9645 |
+ | No log | 8.0 | 132 | 0.1246 | 0.6664 | 0.6794 | 0.6728 | 0.9654 |
+ | No log | 8.9697 | 148 | 0.1230 | 0.6699 | 0.6793 | 0.6745 | 0.9656 |
+ | No log | 9.6970 | 160 | 0.1228 | 0.6701 | 0.6809 | 0.6754 | 0.9657 |
 
 
  ### Framework versions
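
For context on the hyperparameters updated in this commit, the sketch below shows one way the "+" values could be expressed as Hugging Face `TrainingArguments`. It is a minimal sketch, not the author's actual training script; the `output_dir` is a placeholder and the epoch count is an assumption inferred from the training-log table, which stops near epoch 10.

```python
# Minimal sketch (not the commit author's script): maps the updated "+" hyperparameters
# from this diff onto Hugging Face TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bio_clinicalbert-finetuned",   # hypothetical output directory
    learning_rate=0.01,                        # was 0.1 before this commit
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    gradient_accumulation_steps=4,             # 512 x 4 = total_train_batch_size of 2048
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.05,
    num_train_epochs=10,                       # assumption: the log table ends near epoch 10
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

With a single device, 512 samples per step times 4 gradient-accumulation steps gives the listed total_train_batch_size of 2048.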