cfchu committed
Commit 44296f0 · verified · 1 Parent(s): dda9bfb

Upload 29 files

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
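
For orientation, a minimal sketch (not one of the uploaded files) of how this pooling configuration maps onto `sentence_transformers.models.Pooling`, assuming the sentence-transformers 3.x API: only CLS-token pooling is enabled, so each sentence embedding is the 1024-dimensional `[CLS]` vector, which the later `Normalize` module L2-normalizes.

```python
# Sketch only: mirrors 1_Pooling/config.json with explicit keyword arguments.
from sentence_transformers.models import Pooling

pooling = Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,       # take the [CLS] token embedding
    pooling_mode_mean_tokens=False,    # all other modes disabled, as in the config
    pooling_mode_max_tokens=False,
)

print(pooling.get_pooling_mode_str())              # "cls"
print(pooling.get_sentence_embedding_dimension())  # 1024

# Equivalently, the module can be restored straight from the uploaded folder:
# pooling = Pooling.load("1_Pooling")
```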
README.md CHANGED
@@ -1,3 +1,141 @@
- ---
- license: mit
- ---
+ ---
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ ---
+
+ # SentenceTransformer
+
+ This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+ 'The weather is lovely today.',
+ "It's so sunny outside!",
+ 'He drove to the stadium.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.2.1
+ - Transformers: 4.44.2
+ - PyTorch: 2.5.0+cu121
+ - Accelerate: 0.34.2
+ - Datasets: 3.0.2
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
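
The card's "Direct Usage (Transformers)" section is left as a commented-out stub. The following is an editorial sketch (not part of the upload) of what that usage would look like with the plain `transformers` API, mirroring the architecture listed above: BERT encoder, CLS-token pooling, then L2 normalization. `"sentence_transformers_model_id"` is the same placeholder used in the card.

```python
# Sketch only: raw-transformers equivalent of the SentenceTransformer pipeline above.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "sentence_transformers_model_id"  # placeholder, as in the card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (3, seq_len, 1024)

cls_embeddings = token_embeddings[:, 0]                  # Pooling: CLS token
embeddings = F.normalize(cls_embeddings, p=2, dim=1)     # Normalize: unit-length vectors

similarities = embeddings @ embeddings.T                 # cosine similarity on unit vectors
print(embeddings.shape, similarities.shape)              # (3, 1024), (3, 3)
```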
checkpoint-124/1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
checkpoint-124/README.md ADDED
@@ -0,0 +1,141 @@
+ ---
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ ---
+
+ # SentenceTransformer
+
+ This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+ 'The weather is lovely today.',
+ "It's so sunny outside!",
+ 'He drove to the stadium.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.2.1
+ - Transformers: 4.44.2
+ - PyTorch: 2.5.0+cu121
+ - Accelerate: 0.34.2
+ - Datasets: 3.0.2
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
checkpoint-124/config.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "_name_or_path": "./fine_tuned_model/checkpoint-124",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "directionality": "bidi",
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "output_past": true,
+ "pad_token_id": 0,
+ "pooler_fc_size": 768,
+ "pooler_num_attention_heads": 12,
+ "pooler_num_fc_layers": 3,
+ "pooler_size_per_head": 128,
+ "pooler_type": "first_token_transform",
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.44.2",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 21128
+ }
checkpoint-124/config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.2.1",
+ "transformers": "4.44.2",
+ "pytorch": "2.5.0+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
checkpoint-124/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50dbf90f3647e9e723ffabaa9042c6ae20823a2f48ad9a588387524662d11fcc
+ size 1302134568
checkpoint-124/modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
checkpoint-124/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56608c0731d9f676646eafd3841b2a7cc0d49ab8744064e60cb095c385a2d6bf
+ size 2596108193
checkpoint-124/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef647197f5e3cd812295d125459012d44bb0330343628a7ac257fa0efec47f8c
+ size 14244
checkpoint-124/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c89d33f6859ae4d9b15634c388e4894d26989214415b78e1a5bc8db23b4e905
+ size 1064
checkpoint-124/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
checkpoint-124/special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-124/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-124/tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 1000000000000000019884624838656,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
checkpoint-124/trainer_state.json ADDED
@@ -0,0 +1,117 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 2.0,
+ "eval_steps": 500,
+ "global_step": 124,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.16129032258064516,
+ "grad_norm": NaN,
+ "learning_rate": 9.516129032258065e-06,
+ "loss": 1.6042,
+ "step": 10
+ },
+ {
+ "epoch": 0.3225806451612903,
+ "grad_norm": 0.0,
+ "learning_rate": 8.790322580645163e-06,
+ "loss": 0.3049,
+ "step": 20
+ },
+ {
+ "epoch": 0.4838709677419355,
+ "grad_norm": 0.11385558545589447,
+ "learning_rate": 7.983870967741935e-06,
+ "loss": 0.001,
+ "step": 30
+ },
+ {
+ "epoch": 0.6451612903225806,
+ "grad_norm": 0.0,
+ "learning_rate": 7.177419354838711e-06,
+ "loss": 0.0,
+ "step": 40
+ },
+ {
+ "epoch": 0.8064516129032258,
+ "grad_norm": 1.6232772281910002e-07,
+ "learning_rate": 6.370967741935485e-06,
+ "loss": 0.0042,
+ "step": 50
+ },
+ {
+ "epoch": 0.967741935483871,
+ "grad_norm": 3.805576298532287e-09,
+ "learning_rate": 5.564516129032258e-06,
+ "loss": 0.0009,
+ "step": 60
+ },
+ {
+ "epoch": 1.129032258064516,
+ "grad_norm": 3.958922523139563e-09,
+ "learning_rate": 4.758064516129033e-06,
+ "loss": 0.0741,
+ "step": 70
+ },
+ {
+ "epoch": 1.2903225806451613,
+ "grad_norm": NaN,
+ "learning_rate": 4.032258064516129e-06,
+ "loss": 0.8227,
+ "step": 80
+ },
+ {
+ "epoch": 1.4516129032258065,
+ "grad_norm": 24.948503494262695,
+ "learning_rate": 3.225806451612903e-06,
+ "loss": 0.3625,
+ "step": 90
+ },
+ {
+ "epoch": 1.6129032258064515,
+ "grad_norm": 0.0,
+ "learning_rate": 2.4193548387096776e-06,
+ "loss": 0.0,
+ "step": 100
+ },
+ {
+ "epoch": 1.7741935483870968,
+ "grad_norm": 0.0,
+ "learning_rate": 1.6129032258064516e-06,
+ "loss": 0.0392,
+ "step": 110
+ },
+ {
+ "epoch": 1.935483870967742,
+ "grad_norm": 1.1323093573878396e-09,
+ "learning_rate": 8.064516129032258e-07,
+ "loss": 0.0051,
+ "step": 120
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 124,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 2,
+ "save_steps": 1000,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
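
`trainer_state.json` records the loss, gradient norm, and learning rate every 10 steps of the 124-step (2-epoch) run. A small sketch (not part of the upload) for inspecting it; note that two entries store `grad_norm` as the bare token `NaN`, which Python's `json` module parses to `float('nan')` by default.

```python
# Sketch only: print the training log recorded in checkpoint-124/trainer_state.json.
import json

with open("checkpoint-124/trainer_state.json") as f:
    state = json.load(f)  # the file's bare NaN values parse to float('nan')

for entry in state["log_history"]:
    print(f"step {entry['step']:>3}  epoch {entry['epoch']:.2f}  "
          f"loss {entry['loss']:.4f}  grad_norm {entry['grad_norm']:.3g}")
```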
checkpoint-124/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2e6a5bfc2b77f2ba2c6c0d386ff32581f703fe3365a2a06d802f52ac35ca4f2
+ size 5432
checkpoint-124/vocab.txt ADDED
The diff for this file is too large to render. See raw diff
 
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "_name_or_path": "./fine_tuned_model",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "directionality": "bidi",
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "output_past": true,
+ "pad_token_id": 0,
+ "pooler_fc_size": 768,
+ "pooler_num_attention_heads": 12,
+ "pooler_num_fc_layers": 3,
+ "pooler_size_per_head": 128,
+ "pooler_type": "first_token_transform",
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.44.2",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 21128
+ }
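
The root `config.json` describes the backbone: a 24-layer, 16-head BERT encoder with hidden size 1024 and a 21,128-entry vocabulary (the size used by Chinese BERT tokenizers). A brief sketch (not part of the upload) of inspecting it with the standard `transformers` API; the local path `"."` is an assumption standing in for a checkout of this repository.

```python
# Sketch only: load the backbone config and weights from a local checkout of this repo.
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained(".")   # reads config.json
print(config.model_type)                   # "bert"
print(config.num_hidden_layers,            # 24
      config.hidden_size,                  # 1024
      config.num_attention_heads,          # 16
      config.vocab_size)                   # 21128

model = AutoModel.from_pretrained(".")     # loads model.safetensors as a BertModel
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters") # ~326M, matching the ~1.3 GB float32 safetensors
```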
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.2.1",
+ "transformers": "4.44.2",
+ "pytorch": "2.5.0+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50dbf90f3647e9e723ffabaa9042c6ae20823a2f48ad9a588387524662d11fcc
+ size 1302134568
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
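
`modules.json` is what `SentenceTransformer(...)` reads to rebuild the three-stage pipeline. For clarity, a sketch (not part of the upload) of the equivalent manual assembly, assuming the paths are resolved relative to a local checkout of this repository.

```python
# Sketch only: assemble the pipeline described by modules.json by hand.
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import Normalize, Pooling, Transformer

transformer = Transformer(".", max_seq_length=512)  # module 0: path "" -> repo root
pooling = Pooling.load("1_Pooling")                 # module 1: CLS pooling, dim 1024
normalize = Normalize()                             # module 2: L2-normalize outputs

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model.get_sentence_embedding_dimension())     # 1024

# Loading the repo directly, e.g. SentenceTransformer("."), performs the same assembly.
```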
runs/Oct25_03-30-48_af12ca4ffe6c/events.out.tfevents.1729827065.af12ca4ffe6c.1239.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b05ce445eec67c4d8fb5f23de4a049c7f70cdb33c8853cef5d3e8dac82b97314
+ size 8244
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 1000000000000000019884624838656,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
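
The tokenizer is a `BertTokenizer` with lowercasing and per-character handling of Chinese text; `model_max_length` is left at the library's "unset" sentinel, so the effective 512-token limit comes from `sentence_bert_config.json`. A small sketch (not part of the upload), assuming a local checkout at `"."`:

```python
# Sketch only: load the tokenizer defined by tokenizer_config.json / vocab.txt.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # BertTokenizer: do_lower_case=True,
                                                # tokenize_chinese_chars=True
encoded = tokenizer("今天天气很好。", truncation=True, max_length=512)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# Chinese text is split into single characters, wrapped in [CLS] ... [SEP]
```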
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2e6a5bfc2b77f2ba2c6c0d386ff32581f703fe3365a2a06d802f52ac35ca4f2
+ size 5432
vocab.txt ADDED
The diff for this file is too large to render. See raw diff