helling100 committed on
Commit
9c2a6bc
1 Parent(s): 25b467e

Upload TFBertForSequenceClassification

Files changed (3)
  1. README.md +70 -0
  2. config.json +32 -0
  3. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: Regression_bert_1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # Regression_bert_1
+
+ This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
+ It achieves the following results on the training and validation sets:
+ - Train Loss: 0.3594
+ - Train Mae: 0.2822
+ - Train Mse: 0.1206
+ - Train R2-score: 0.6163
+ - Train Accuracy: 0.5308
+ - Validation Loss: 0.3503
+ - Validation Mae: 0.3488
+ - Validation Mse: 0.1574
+ - Validation R2-score: 0.8718
+ - Validation Accuracy: 0.2703
+ - Epoch: 9
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Train Mae | Train Mse | Train R2-score | Train Accuracy | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Validation Accuracy | Epoch |
+ |:----------:|:---------:|:---------:|:--------------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-------------------:|:-----:|
+ | 0.4941 | 0.2941 | 0.1183 | -0.5444 | 0.5769 | 0.3126 | 0.3108 | 0.1099 | 0.8865 | 0.2703 | 0 |
+ | 0.4660 | 0.3256 | 0.1546 | 0.0002 | 0.5231 | 0.3682 | 0.3669 | 0.1835 | 0.8572 | 0.2703 | 1 |
+ | 0.4110 | 0.3178 | 0.1552 | 0.6834 | 0.5 | 0.4381 | 0.4369 | 0.2390 | 0.8207 | 0.2703 | 2 |
+ | 0.3886 | 0.3112 | 0.1560 | 0.7184 | 0.5231 | 0.3566 | 0.3552 | 0.1672 | 0.8661 | 0.2703 | 3 |
+ | 0.4055 | 0.2890 | 0.1248 | 0.7655 | 0.6077 | 0.4364 | 0.4353 | 0.2376 | 0.8218 | 0.2703 | 4 |
+ | 0.3955 | 0.2930 | 0.1272 | 0.7685 | 0.5538 | 0.3868 | 0.3855 | 0.1971 | 0.8489 | 0.2703 | 5 |
+ | 0.3949 | 0.3003 | 0.1386 | 0.3857 | 0.5154 | 0.3614 | 0.3600 | 0.1751 | 0.8620 | 0.2703 | 6 |
+ | 0.3390 | 0.2874 | 0.1306 | 0.7121 | 0.5231 | 0.3766 | 0.3753 | 0.1894 | 0.8542 | 0.2703 | 7 |
+ | 0.3556 | 0.2775 | 0.1190 | 0.7890 | 0.5231 | 0.3561 | 0.3547 | 0.1664 | 0.8667 | 0.2703 | 8 |
+ | 0.3594 | 0.2822 | 0.1206 | 0.6163 | 0.5308 | 0.3503 | 0.3488 | 0.1574 | 0.8718 | 0.2703 | 9 |
+
+
+ ### Framework versions
+
+ - Transformers 4.27.2
+ - TensorFlow 2.11.0
+ - Datasets 2.10.1
+ - Tokenizers 0.13.2
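
The card above stops short of a usage snippet. The sketch below is a minimal inference example, assuming the model is published as `helling100/Regression_bert_1` (committer name plus model name; the real repository id may differ) and reusing the `bert-base-cased` tokenizer, since this commit only adds the config and the TF weights.

```python
# Minimal inference sketch. The repo id and tokenizer choice are assumptions:
# this commit ships only config.json and tf_model.h5, no tokenizer files.
from transformers import AutoTokenizer, TFBertForSequenceClassification

repo_id = "helling100/Regression_bert_1"  # hypothetical; adjust to the real repository id

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFBertForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example sentence to score.", return_tensors="tf")
outputs = model(inputs)

# With problem_type="regression" and a single label, the logit is the predicted value itself,
# so no softmax or argmax is applied.
print(float(outputs.logits[0, 0]))
```
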
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "bert-base-cased",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "regression",
+   "transformers_version": "4.27.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 28996
+ }
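
Taken together, config.json and the card's hyperparameter list pin down most of the training setup: a single-output BERT head with `problem_type: "regression"`, optimized with Adam at learning rate 2e-05 (beta_1 0.9, beta_2 0.999, epsilon 1e-07) in float32. A sketch of recreating that setup follows; the dataset and the exact Keras metrics are not recorded in this commit, so those parts are omitted or assumed.

```python
# Training-setup sketch reconstructed from config.json and the card's hyperparameter list.
# The training data is not part of this commit, so no dataset code is shown.
import tensorflow as tf
from transformers import BertConfig, TFBertForSequenceClassification

config = BertConfig.from_pretrained(
    "bert-base-cased",
    num_labels=1,                # id2label/label2id contain a single LABEL_0 entry
    problem_type="regression",   # as recorded in config.json
)
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=config)

# Adam settings taken from the card; it also records jit_compile=True and no weight decay.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=2e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-7
)

# With no explicit loss, the model falls back to its internal loss computation,
# which is mean squared error for problem_type="regression".
model.compile(optimizer=optimizer)
```
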
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8a35631d786f0744012b2d5b183757de70a48fb8c9592a2e13782b81c345b6b
+ size 433532180
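
The tf_model.h5 entry is a Git LFS pointer rather than the weights themselves; the sha256 oid and the 433532180-byte size identify the actual file. A download-and-verify sketch follows, again assuming the hypothetical repo id `helling100/Regression_bert_1`.

```python
# Fetch the TF weights and check them against the LFS pointer above.
# The repo id is an assumption inferred from this commit page.
import hashlib

from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="helling100/Regression_bert_1", filename="tf_model.h5")

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

# Expected oid from the pointer file:
print(digest.hexdigest() == "b8a35631d786f0744012b2d5b183757de70a48fb8c9592a2e13782b81c345b6b")
```
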