MugheesAwan11 commited on
Commit
efbc514
1 Parent(s): 1f960f1

Add new SentenceTransformer model.

Browse files
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,590 @@
+ ---
+ base_model: BAAI/bge-base-en-v1.5
+ datasets: []
+ language:
+ - en
+ library_name: sentence-transformers
+ license: apache-2.0
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_ndcg@100
+ - cosine_mrr@10
+ - cosine_map@100
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:6201
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: ' entirety. This is a form of ownership that can only be created
+     by married persons. Both spouses hold title to the whole property with the right
+     of survivorship. When one spouse dies, the surviving spouse takes title to the
+     property. When the second spouse dies, the property is distributed to the heirs
+     according to the terms of the will. Tenants in Common. Jointly owned assets may
+     also be held as tenants in common. With this form of ownership, each owner holds
+     a share of the property, which may or may not be equal. When one owner dies, his
+     or her share passes immediately to that persons heirs, according to the laws in
+     each state. Bank accounts, securities accounts and certificates of deposit can
+     be set up as joint accounts, which may provide liquidity after your death. For
+     example, you could open a joint checking account, with right of YOUR LEGACY An
+     Estate-Planning Guide 13 survivorship, with one of your adult children. After
+     your death, the adult child would'
+   sentences:
+   - What determines the date of deposit?
+   - What are the advantages of shopping online and how can you find and compare products
+     easily?
+   - What are the different forms of ownership in real estate and how do they work?
+ - source_sentence: ' If you''re starting the new year with credit card debt, focus
+     on creating a plan for bringing the balances down. And remember to track your
+     progress so you have a motivational boost to stick with it. Why is a Good Credit
+     Score Important? A good credit score can open a variety of financial doors. Higher
+     credit scores can allow you to qualify for premium credit cards with better rewards
+     and perks. An excellent credit score can also help you qualify for certain loans
+     and mortgages, or even get better interest rates on the loans that you qualify
+     for. With poor or no credit history, many financial products may be unavailable.
+     But if you start implementing these keyways to improve your credit score, youll
+     be on track to a better credit score and all the benefits that come with it. Using
+     a Citi Secured Mastercard If youre just starting your credit journey, it may be
+     hard to see what credit products you can qualify for. A secured credit card like
+     the Citi Secured Mastercard is a great entry'
+   sentences:
+   - What are the benefits of having a good credit score?
+   - What is the purpose of the above information provided by Citi?
+   - When is the Best Time to Apply for a Credit Card?
+ - source_sentence: ' decreased rate of return on the reinvestment of the proceeds
+     received as a result of a payment on a Deposit prior to its scheduled maturity, payment
+     in cash of the Deposit principal prior to maturity in connection with the liquidation
+     of an insured institution or the assumption of all or a portion of its deposit
+     liabilities at a lower interest rate or its 29 receipt of a decreased rate of
+     return as compared to the return on the applicable securities, indices, currencies,
+     intangibles, articles, commodities or goods or any other economic measure or instrument,
+     including the occurrence or non-occurrence of any event. Preference in Right of
+     Payment Federal legislation adopted in 1993 provides for a preference in right
+     of payment of certain claims made in the liquidation or other resolution of any
+     FDIC-insured depository institution. The statute requires claims to be paid in
+     the following order: First, administrative expenses of the receiver; Second, any
+     deposit liability of the institution; Third, any other general or senior liability
+     of the'
+   sentences:
+   - How can I protect myself from fake Citi SMS texts and fraudulent money transfers?
+   - What are the details required to transfer funds out of my account and what are
+     the different types of payments available for transferring funds out of my account?
+   - What is the mechanism for decreased rate of return on reinvestment of the proceeds
+     received as a result of a payment on a Deposit prior to its scheduled maturity?
+ - source_sentence: ' Citigroup Inc. All rights reserved. Citi, Citi and Arc Design
+     and other marks used herein are service marks of Citigroup Inc. or its affliates,
+     used and registered throughout the world. 2164316 GTS26358 0223 Tips to Become
+     a Smart Credit Card User Citi.com - ATM Branch - Open an Account - Espaol !Citibank
+     LogoSearch!Search Citi.com Menu - Credit Cards - View All Credit Cards - 0 Intro
+     APR Credit Cards - Balance Transfer Credit Cards - Cash Back Credit Cards - Rewards
+     Credit Cards - See If You''re Pre-Selected - Small Business Credit Cards - Banking
+     - Banking Overview - Checking - Savings - Certificates of Deposit - Banking IRAs
+     - Rates - Small Business Banking - Lending - Personal Loans Lines of Credit -
+     Mortgage - Home Equity - Small Business Lending - Investing - Investing with Citi
+     - Self Directed Trading - Citigold - Credit Cards - Credit Knowledge Center -
+     Understanding Credit Cards - Tips'
+   sentences:
+   - What are the tips to become a smart credit card user?
+   - What information do we request and receive from you to explain transactions or
+     attempted transactions in or through your account?
+   - Who has permission from the primary cardholder to use the credit card account
+     and receive their own card with their own name?
+ - source_sentence: ' and Arc Design is a registered service mark of Citigroup Inc.
+     OpenInvestor is a service mark of Citigroup Inc. 1044398 GTS74053 0113 Trade Working
+     Capital Viewpoints Navigating global uncertainty: Perspectives on supporting the
+     healthcare supply chain November 2023 Treasury and Trade Solutions Foreword Foreword
+     Since the inception of the COVID-19 pandemic, the healthcare industry has faced
+     supply chain disruptions. The industry, which has a long tradition in innovation,
+     continues to transform to meet the needs of an evolving environment. Pauline kXXXXX
+     Unlocking the full potential within the healthcare industry Global Head, Trade
+     requires continuous investment. As corporates plan for the Working Capital Advisory
+     future, careful working capital management is essential to ensuring they get there.
+     Andrew Betts Global head of TTS Trade Sales Client Management, Citi Bayo Gbowu
+     Global Sector Lead, Trade Healthcare and Wellness Ian Kervick-Jimenez Trade Working
+     Capital Advisory 2 Treasury and Trade Solutions The Working'
+   sentences:
+   - How can I manage my Citibank accounts through International Personal Bank U.S.,
+     either via internet, text messages, or phone calls?
+   - What are the registered service marks of Citigroup Inc?
+   - What is the role of DXX jXXXX US Real Estate Total Return SM Index in determining,
+     composing or calculating products?
+ model-index:
+ - name: SentenceTransformer based on BAAI/bge-base-en-v1.5
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 768
+       type: dim_768
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.49420289855072463
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.6768115942028986
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.7478260869565218
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.8333333333333334
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.49420289855072463
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.22560386473429955
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.14956521739130432
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.08333333333333333
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.49420289855072463
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.6768115942028986
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.7478260869565218
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.8333333333333334
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.6585419708540992
+       name: Cosine Ndcg@10
+     - type: cosine_ndcg@100
+       value: 0.6900535995185644
+       name: Cosine Ndcg@100
+     - type: cosine_mrr@10
+       value: 0.6032240625718881
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.6096261483024806
+       name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on BAAI/bge-base-en-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
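+
+ The Pooling and Normalize modules above are simple post-processing on the transformer's per-token outputs: with `pooling_mode_cls_token: true`, the `[CLS]` token vector is taken as the sentence embedding, then scaled to unit length. A minimal pure-Python sketch (the helper name and toy 4-dimensional vectors are illustrative, standing in for real 768-dimensional BertModel outputs):
+
+ ```python
+ import math
+
+ def cls_pool_and_normalize(token_embeddings):
+     """Illustrative stand-in for the Pooling + Normalize modules.
+
+     token_embeddings: list of per-token vectors from the transformer;
+     index 0 is the [CLS] token (pooling_mode_cls_token: true).
+     """
+     cls = token_embeddings[0]
+     norm = math.sqrt(sum(x * x for x in cls))
+     return [x / norm for x in cls]  # Normalize(): unit-length vector
+
+ # Toy vectors standing in for real (seq_len, 768) transformer output
+ tokens = [[3.0, 4.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]
+ print(cls_pool_and_normalize(tokens))  # [0.6, 0.8, 0.0, 0.0]
+ ```
+
+ Because every embedding is unit length, cosine similarity between two sentences reduces to a plain dot product of their vectors.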
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")
+ # Run inference
+ sentences = [
+     ' and Arc Design is a registered service mark of Citigroup Inc. OpenInvestor is a service mark of Citigroup Inc. 1044398 GTS74053 0113 Trade Working Capital Viewpoints Navigating global uncertainty: Perspectives on supporting the healthcare supply chain November 2023 Treasury and Trade Solutions Foreword Foreword Since the inception of the COVID-19 pandemic, the healthcare industry has faced supply chain disruptions. The industry, which has a long tradition in innovation, continues to transform to meet the needs of an evolving environment. Pauline kXXXXX Unlocking the full potential within the healthcare industry Global Head, Trade requires continuous investment. As corporates plan for the Working Capital Advisory future, careful working capital management is essential to ensuring they get there. Andrew Betts Global head of TTS Trade Sales Client Management, Citi Bayo Gbowu Global Sector Lead, Trade Healthcare and Wellness Ian Kervick-Jimenez Trade Working Capital Advisory 2 Treasury and Trade Solutions The Working',
+     'What are the registered service marks of Citigroup Inc?',
+     'What is the role of DXX jXXXX US Real Estate Total Return SM Index in determining, composing or calculating products?',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+ * Dataset: `dim_768`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.4942     |
+ | cosine_accuracy@3   | 0.6768     |
+ | cosine_accuracy@5   | 0.7478     |
+ | cosine_accuracy@10  | 0.8333     |
+ | cosine_precision@1  | 0.4942     |
+ | cosine_precision@3  | 0.2256     |
+ | cosine_precision@5  | 0.1496     |
+ | cosine_precision@10 | 0.0833     |
+ | cosine_recall@1     | 0.4942     |
+ | cosine_recall@3     | 0.6768     |
+ | cosine_recall@5     | 0.7478     |
+ | cosine_recall@10    | 0.8333     |
+ | cosine_ndcg@10      | 0.6585     |
+ | cosine_ndcg@100     | 0.6901     |
+ | cosine_mrr@10       | 0.6032     |
+ | **cosine_map@100**  | **0.6096** |
+
302
+ <!--
303
+ ## Bias, Risks and Limitations
304
+
305
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
306
+ -->
307
+
308
+ <!--
309
+ ### Recommendations
310
+
311
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
312
+ -->
313
+
314
+ ## Training Details
315
+
316
+ ### Training Dataset
317
+
318
+ #### Unnamed Dataset
319
+
320
+
321
+ * Size: 6,201 training samples
322
+ * Columns: <code>positive</code> and <code>anchor</code>
323
+ * Approximate statistics based on the first 1000 samples:
324
+ | | positive | anchor |
325
+ |:--------|:--------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
326
+ | type | string | string |
327
+ | details | <ul><li>min: 146 tokens</li><li>mean: 205.96 tokens</li><li>max: 289 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 26.75 tokens</li><li>max: 241 tokens</li></ul> |
328
+ * Samples:
329
+ | positive | anchor |
330
+ |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------|
331
+ | <code> combined balances do not include: balances in delinquent accounts; balances that exceed your approved credit When Deposits Are Credited to an Account limit for any line of credit or credit card; or outstanding balances Deposits received before the end of a Business Day will be credited to your account that day. However, there been established for the Citigold Account Package. Your may be a delay before these funds are available for your use. See combined monthly balance range will be determined by computing the Funds Availability at Citibank section of this Marketplace an average of your monthly balances for your linked accounts Addendum for more information. during the prior calendar month. Monthly service fees are applied only to accounts with a combined average monthly balance range under the specified limits starting two statement cycles after account opening. Service fees assessed will appear as a charge on your next statement. 2 3 Combined Average Monthly Non- Per Special Circumstances Monthly Balance Service Citibank Check If a checking account is converted</code> | <code>What are the conditions for balances to be included in the combined balances?</code> |
332
+ | <code> the first six months, your credit score may not be where you want it just yet. There are other factors that impact your credit score including the length of your credit file, your credit mix and your credit utilization. If youre hoping to repair a credit score that has been damaged by financial setbacks, the timelines can be longer. A year or two with regular, timely payments and good credit utilization can push your credit score up. However, bankruptcies, collections, and late payments can linger on your credit report for anywhere from seven to ten years. That said, you may not have to use a secured credit card throughout your entire credit building process. Your goal may be to repair your credit to the point where your credit score is good enough to make you eligible for an unsecured credit card. To that end, youll need to research factors such as any fees that apply to the unsecured credit cards youre considering. There is no quick fix to having a great credit score. Building good credit with a</code> | <code>What factors impact your credit score including the length of your credit file, your credit mix, and your credit utilization?</code> |
333
+ | <code> by the index sponsor of the Constituents when it calculated the hypothetical back-tested index levels for the Constituents. It is impossible to predict whether the Index will rise or fall. The actual future performance of the Index may bear no relation to the historical or hypothetical back-tested levels of the Index. The Index Administrator, which is our Affiliate, and the Index Calculation Agent May Exercise Judgments under Certain Circumstances in the Calculation of the Index. Although the Index is rules- based, there are certain circumstances under which the Index Administrator or Index Calculation Agent may be required to exercise judgment in calculating the Index, including the following: The Index Administrator will determine whether an ambiguity, error or omission has arisen and the Index Administrator may resolve such ambiguity, error or omission, acting in good faith and in a commercially reasonable manner, and may amend the Index Rules to reflect the resolution of the ambiguity, error or omission in a manner that is consistent with the commercial objective of the Index. The Index Calculation Agents calculations</code> | <code>What circumstances may require the Index Administrator or Index Calculation Agent to exercise judgment in calculating the Index?</code> |
334
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
335
+ ```json
336
+ {
337
+ "loss": "MultipleNegativesRankingLoss",
338
+ "matryoshka_dims": [
339
+ 768
340
+ ],
341
+ "matryoshka_weights": [
342
+ 1
343
+ ],
344
+ "n_dims_per_step": -1
345
+ }
346
+ ```
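+
+ Since `matryoshka_dims` is `[768]` with weight `1`, the Matryoshka wrapper here reduces to its inner `MultipleNegativesRankingLoss` at full dimensionality: each anchor is scored against every positive in the batch, and cross-entropy pushes the matching pair's similarity above the rest. A minimal sketch of that inner objective (the helper name and the similarity scale of 20 are illustrative assumptions, not read from this repo's code):
+
+ ```python
+ import math
+
+ def mnrl_loss(sims, scale=20.0):
+     """Sketch of MultipleNegativesRankingLoss for one batch.
+
+     sims[i][j]: cosine similarity between anchor i and positive j;
+     the diagonal (j == i) is the true pair.  `scale` mirrors the loss's
+     common default similarity scale (an assumption, not from this repo).
+     """
+     total = 0.0
+     for i, row in enumerate(sims):
+         logits = [scale * s for s in row]
+         log_norm = math.log(sum(math.exp(l) for l in logits))
+         total += log_norm - logits[i]  # cross-entropy with target class i
+     return total / len(sims)
+
+ # Well-separated pairs (large diagonal) give near-zero loss
+ print(mnrl_loss([[0.9, 0.1], [0.2, 0.8]]))  # near zero: diagonal dominates
+ ```
+
+ The other in-batch positives act as negatives for each anchor, so larger batches supply more negatives per anchor, which is one reason the `per_device_train_batch_size` of 32 below matters.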
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 2
+ - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ - `tf32`: True
+ - `load_best_model_at_end`: True
+ - `optim`: adamw_torch_fused
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 2
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: True
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch   | Step    | Training Loss | dim_768_cosine_map@100 |
+ |:-------:|:-------:|:-------------:|:----------------------:|
+ | 0.0515  | 10      | 0.7623        | -                      |
+ | 0.1031  | 20      | 0.6475        | -                      |
+ | 0.1546  | 30      | 0.4492        | -                      |
+ | 0.2062  | 40      | 0.3238        | -                      |
+ | 0.2577  | 50      | 0.2331        | -                      |
+ | 0.3093  | 60      | 0.2575        | -                      |
+ | 0.3608  | 70      | 0.3619        | -                      |
+ | 0.4124  | 80      | 0.1539        | -                      |
+ | 0.4639  | 90      | 0.1937        | -                      |
+ | 0.5155  | 100     | 0.241         | -                      |
+ | 0.5670  | 110     | 0.2192        | -                      |
+ | 0.6186  | 120     | 0.2553        | -                      |
+ | 0.6701  | 130     | 0.2438        | -                      |
+ | 0.7216  | 140     | 0.1916        | -                      |
+ | 0.7732  | 150     | 0.189         | -                      |
+ | 0.8247  | 160     | 0.1721        | -                      |
+ | 0.8763  | 170     | 0.2353        | -                      |
+ | 0.9278  | 180     | 0.1713        | -                      |
+ | 0.9794  | 190     | 0.2121        | -                      |
+ | 1.0     | 194     | -             | 0.6100                 |
+ | 1.0309  | 200     | 0.1394        | -                      |
+ | 1.0825  | 210     | 0.156         | -                      |
+ | 1.1340  | 220     | 0.1276        | -                      |
+ | 1.1856  | 230     | 0.0969        | -                      |
+ | 1.2371  | 240     | 0.0811        | -                      |
+ | 1.2887  | 250     | 0.0699        | -                      |
+ | 1.3402  | 260     | 0.0924        | -                      |
+ | 1.3918  | 270     | 0.0838        | -                      |
+ | 1.4433  | 280     | 0.064         | -                      |
+ | 1.4948  | 290     | 0.0624        | -                      |
+ | 1.5464  | 300     | 0.0837        | -                      |
+ | 1.5979  | 310     | 0.0881        | -                      |
+ | 1.6495  | 320     | 0.1065        | -                      |
+ | 1.7010  | 330     | 0.0646        | -                      |
+ | 1.7526  | 340     | 0.084         | -                      |
+ | 1.8041  | 350     | 0.0697        | -                      |
+ | 1.8557  | 360     | 0.0888        | -                      |
+ | 1.9072  | 370     | 0.0873        | -                      |
+ | 1.9588  | 380     | 0.0755        | -                      |
+ | **2.0** | **388** | **-**         | **0.6096**             |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.10.14
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.1.2+cu121
+ - Accelerate: 0.32.1
+ - Datasets: 2.19.1
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "BAAI/bge-base-en-v1.5",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.0.1",
+ "transformers": "4.41.2",
+ "pytorch": "2.1.2+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f12891cb43fcad269b5773fd2c285fcbb6774436e5ecb8ac0cb0ae5b44d2b274
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
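modules.json declares the inference pipeline: a Transformer backbone, then the CLS-token Pooling configured in 1_Pooling/config.json, then L2 normalization. A hedged NumPy sketch of the last two stages (illustration only; the real modules live in the sentence-transformers package):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Apply the Pooling module (pooling_mode_cls_token=true): take the first
    token's vector; then the Normalize module: scale it to unit length."""
    cls = token_embeddings[:, 0, :]                        # (batch, hidden)
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# toy transformer output: batch of 2 sentences, 5 tokens, hidden_size 768
tokens = np.random.default_rng(2).normal(size=(2, 5, 768))
sentence_embeddings = cls_pool_and_normalize(tokens)
print(sentence_embeddings.shape)  # (2, 768)
```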
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": true
+ }
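sentence_bert_config.json caps inputs at 512 tokens and lowercases them before encoding. A toy sketch of those two settings (whitespace splitting stands in for the real WordPiece tokenizer configured in tokenizer_config.json):

```python
def preprocess(text: str, max_seq_length: int = 512, do_lower_case: bool = True) -> list:
    """Mimic the two settings from sentence_bert_config.json: lowercase the
    input, then truncate the token sequence to max_seq_length."""
    if do_lower_case:
        text = text.lower()
    tokens = text.split()  # stand-in for WordPiece tokenization
    return tokens[:max_seq_length]

print(preprocess("Hello World"))      # ['hello', 'world']
print(len(preprocess("tok " * 600)))  # 512
```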
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff