helling100 committed
Commit: d9dbb04
Parent(s): 9c2a6bc

Upload TFBertForSequenceClassification

Files changed (3):
  1. README.md +21 -21
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -14,15 +14,15 @@ probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.3594
- - Train Mae: 0.2822
- - Train Mse: 0.1206
- - Train R2-score: 0.6163
- - Train Accuracy: 0.5308
- - Validation Loss: 0.3503
- - Validation Mae: 0.3488
- - Validation Mse: 0.1574
- - Validation R2-score: 0.8718
+ - Train Loss: 0.4893
+ - Train Mae: 0.3046
+ - Train Mse: 0.1329
+ - Train R2-score: 0.7411
+ - Train Accuracy: 0.5615
+ - Validation Loss: 0.3333
+ - Validation Mae: 0.3317
+ - Validation Mse: 0.1327
+ - Validation R2-score: 0.8816
  - Validation Accuracy: 0.2703
  - Epoch: 9
  
@@ -43,28 +43,28 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.0002, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  - training_precision: float32
  
  ### Training results
  
  | Train Loss | Train Mae | Train Mse | Train R2-score | Train Accuracy | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Validation Accuracy | Epoch |
  |:----------:|:---------:|:---------:|:--------------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-------------------:|:-----:|
- | 0.4941 | 0.2941 | 0.1183 | -0.5444 | 0.5769 | 0.3126 | 0.3108 | 0.1099 | 0.8865 | 0.2703 | 0 |
- | 0.4660 | 0.3256 | 0.1546 | 0.0002 | 0.5231 | 0.3682 | 0.3669 | 0.1835 | 0.8572 | 0.2703 | 1 |
- | 0.4110 | 0.3178 | 0.1552 | 0.6834 | 0.5 | 0.4381 | 0.4369 | 0.2390 | 0.8207 | 0.2703 | 2 |
- | 0.3886 | 0.3112 | 0.1560 | 0.7184 | 0.5231 | 0.3566 | 0.3552 | 0.1672 | 0.8661 | 0.2703 | 3 |
- | 0.4055 | 0.2890 | 0.1248 | 0.7655 | 0.6077 | 0.4364 | 0.4353 | 0.2376 | 0.8218 | 0.2703 | 4 |
- | 0.3955 | 0.2930 | 0.1272 | 0.7685 | 0.5538 | 0.3868 | 0.3855 | 0.1971 | 0.8489 | 0.2703 | 5 |
- | 0.3949 | 0.3003 | 0.1386 | 0.3857 | 0.5154 | 0.3614 | 0.3600 | 0.1751 | 0.8620 | 0.2703 | 6 |
- | 0.3390 | 0.2874 | 0.1306 | 0.7121 | 0.5231 | 0.3766 | 0.3753 | 0.1894 | 0.8542 | 0.2703 | 7 |
- | 0.3556 | 0.2775 | 0.1190 | 0.7890 | 0.5231 | 0.3561 | 0.3547 | 0.1664 | 0.8667 | 0.2703 | 8 |
- | 0.3594 | 0.2822 | 0.1206 | 0.6163 | 0.5308 | 0.3503 | 0.3488 | 0.1574 | 0.8718 | 0.2703 | 9 |
+ | 0.4663 | 0.3382 | 0.1856 | 0.6803 | 0.5231 | 0.3295 | 0.3278 | 0.1280 | 0.8831 | 0.2703 | 0 |
+ | 0.4395 | 0.3199 | 0.1515 | 0.7074 | 0.5462 | 0.3503 | 0.3489 | 0.1568 | 0.8714 | 0.2703 | 1 |
+ | 0.5018 | 0.3120 | 0.1403 | 0.6400 | 0.5462 | 0.2535 | 0.2512 | 0.0837 | 0.8559 | 0.6757 | 2 |
+ | 0.5881 | 0.3602 | 0.1923 | 0.3795 | 0.4538 | 0.4672 | 0.4662 | 0.2662 | 0.8014 | 0.2703 | 3 |
+ | 0.4176 | 0.3324 | 0.1672 | 0.7333 | 0.5154 | 0.3716 | 0.3703 | 0.1859 | 0.8555 | 0.2703 | 4 |
+ | 0.4885 | 0.3136 | 0.1458 | 0.6244 | 0.6231 | 0.5000 | 0.4991 | 0.2992 | 0.7767 | 0.2703 | 5 |
+ | 0.4312 | 0.2927 | 0.1279 | 0.5553 | 0.5846 | 0.3057 | 0.3039 | 0.1040 | 0.8864 | 0.2703 | 6 |
+ | 0.5013 | 0.3242 | 0.1486 | 0.6517 | 0.5 | 0.4892 | 0.4882 | 0.2879 | 0.7852 | 0.2703 | 7 |
+ | 0.4581 | 0.3535 | 0.1797 | 0.5703 | 0.4846 | 0.3611 | 0.3597 | 0.1744 | 0.8621 | 0.2703 | 8 |
+ | 0.4893 | 0.3046 | 0.1329 | 0.7411 | 0.5615 | 0.3333 | 0.3317 | 0.1327 | 0.8816 | 0.2703 | 9 |
  
  
  ### Framework versions
  
- - Transformers 4.27.2
+ - Transformers 4.27.3
  - TensorFlow 2.11.0
  - Datasets 2.10.1
  - Tokenizers 0.13.2
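
The optimizer entry in the hyperparameters above is the serialized Keras Adam config recorded on the card, with the learning rate changing from 2e-05 to 0.0002 in this commit. Below is a minimal sketch of rebuilding that setup in TensorFlow 2.11; the base checkpoint and `num_labels=1` are assumptions inferred from the card's bert-base-cased lineage and the regression `problem_type` in config.json, not stated in the diff itself.

```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

# Adam as serialized in the updated card: learning_rate=0.0002,
# beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=2e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# Assumption: a single regression head (num_labels=1) on top of
# bert-base-cased, matching problem_type="regression" in config.json.
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels=1,
    problem_type="regression",
)

# transformers TF models can be compiled without an explicit loss;
# the model's built-in regression (MSE) loss is then used, in float32
# as listed under training_precision.
model.compile(optimizer=optimizer)
```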
config.json CHANGED
@@ -25,7 +25,7 @@
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "problem_type": "regression",
- "transformers_version": "4.27.2",
+ "transformers_version": "4.27.3",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 28996
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b8a35631d786f0744012b2d5b183757de70a48fb8c9592a2e13782b81c345b6b
+ oid sha256:19949039e0755ae907f37444bf432613e5c98d8bcdd51cc314a5ce1555bfd9ad
  size 433532180
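
Since the commit replaces tf_model.h5, the uploaded weights load through the TFBertForSequenceClassification class named in the commit message. A rough usage sketch follows, assuming a placeholder repository id and falling back to the bert-base-cased tokenizer; neither detail is stated on this page.

```python
from transformers import AutoTokenizer, TFBertForSequenceClassification

# Placeholder: this page does not give the actual repository id.
repo_id = "helling100/your-model-name"

# The card says the model was fine-tuned from bert-base-cased, so its
# tokenizer is a reasonable fallback if the repo ships no tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFBertForSequenceClassification.from_pretrained(repo_id)

# With problem_type="regression" (config.json), the logit is read as a
# continuous score rather than a class probability.
inputs = tokenizer("An example sentence to score.", return_tensors="tf")
score = float(model(**inputs).logits[0, 0])
print(score)
```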