revtestuser committed on
Commit 711b252
1 Parent(s): df924ad

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
README.md ADDED
---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Chevron regularly conducts employee surveys throughout the year
    to assess the health of the company’s culture, allowing them to gain insights
    into employee well-being.
  sentences:
  - What was the net cash provided by operating activities for the year ended December
    31, 2023?
  - How often does Chevron conduct employee surveys to assess the health of its culture?
  - What were the total future minimum lease payments for Comcast's operating leases
    as of December 31, 2023?
- source_sentence: Gross margin for the fiscal year decreased 250 basis points to
    43.5% primarily driven by higher product costs, higher markdowns and unfavorable
    changes in foreign currency exchange rates, partially offset by strategic pricing
    actions.
  sentences:
  - How does the company maintain high standards of product quality and safety?
  - What were the main factors that negatively impacted NIKE's gross margin in fiscal
    2023?
  - What was the growth rate of Visa Inc.'s commercial payments volume internationally
    between 2021 and 2022?
- source_sentence: Mr. Teter holds a B.S. degree in Mechanical Engineering from the
    University of California at Davis and a J.D. degree from Stanford Law School.
  sentences:
  - What degrees does Timothy S. Teter hold and from which institutions?
  - What regulations are in place in Europe regarding interactions between pharmaceutical
    companies and physicians?
  - What economic factors particularly affected Garmin's consumer behavior in 2023?
- source_sentence: Our Office of Diversity, Equity and Inclusion supports our focus
    on associate diversity, supplier diversity, and engagement with our communities.
  sentences:
  - What are the three segments of alcohol ready-to-drink beverages the company is
    focusing on?
  - How much net cash was provided by operating activities in 2023?
  - What is the focus of The Home Depot's Office of Diversity, Equity and Inclusion?
- source_sentence: Net cash used in financing activities totaled $2,614 in 2023, compared
    to $4,283 in 2022.
  sentences:
  - What was the net cash used in financing activities in 2023 and how does it compare
    to 2022?
  - What are Chipotle's key strategies for business growth as discussed in their strategy?
  - What are the primary regulatory authorities that supervise and regulate JPMorgan
    Chase in the U.S.?
model-index:
- name: BGE base Financial Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.6971428571428572
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.82
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8685714285714285
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9057142857142857
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6971428571428572
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2733333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1737142857142857
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09057142857142855
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6971428571428572
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.82
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8685714285714285
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9057142857142857
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.803607128355984
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.770687641723356
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.77485834386751
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.6957142857142857
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8228571428571428
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8642857142857143
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9042857142857142
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6957142857142857
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2742857142857143
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17285714285714285
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0904285714285714
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6957142857142857
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8228571428571428
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8642857142857143
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9042857142857142
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.802840202489837
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7701360544217687
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7744106258164117
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.6871428571428572
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8185714285714286
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8528571428571429
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8985714285714286
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6871428571428572
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27285714285714285
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17057142857142854
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08985714285714284
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6871428571428572
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8185714285714286
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8528571428571429
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8985714285714286
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.795190594370522
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7619773242630383
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7664081914180308
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.6685714285714286
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8128571428571428
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8428571428571429
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8942857142857142
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6685714285714286
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27095238095238094
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16857142857142854
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08942857142857143
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6685714285714286
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8128571428571428
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8428571428571429
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8942857142857142
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7840862792892018
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7486655328798184
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7527149388922518
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.6471428571428571
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7828571428571428
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8242857142857143
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8685714285714285
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6471428571428571
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.26095238095238094
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16485714285714284
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08685714285714284
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6471428571428571
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7828571428571428
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8242857142857143
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8685714285714285
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7601900384958588
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.725268707482993
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7302983967510448
      name: Cosine Map@100
---

# BGE base Financial Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - json
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("revtestuser/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'Net cash used in financing activities totaled $2,614 in 2023, compared to $4,283 in 2022.',
    'What was the net cash used in financing activities in 2023 and how does it compare to 2022?',
    "What are Chipotle's key strategies for business growth as discussed in their strategy?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
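Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to the leading 512, 256, 128, or 64 dimensions with only a modest quality drop (recent Sentence Transformers releases expose this via the `truncate_dim` argument of `SentenceTransformer`). The numpy sketch below shows the equivalent post-processing by hand; the helper name is ours, not part of the library:

```python
import numpy as np

def truncate_and_normalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` Matryoshka dimensions and re-normalize to unit length
    so that cosine similarity remains meaningful."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Toy stand-in for model.encode(...) output (unit-normalized 768-dim vectors).
rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

small = truncate_and_normalize(full, 256)
print(small.shape)  # (3, 256)
```

Truncated vectors trade a little retrieval quality for a proportional reduction in index size and search cost; the evaluation tables below quantify the trade-off for this model.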

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6971     |
| cosine_accuracy@3   | 0.82       |
| cosine_accuracy@5   | 0.8686     |
| cosine_accuracy@10  | 0.9057     |
| cosine_precision@1  | 0.6971     |
| cosine_precision@3  | 0.2733     |
| cosine_precision@5  | 0.1737     |
| cosine_precision@10 | 0.0906     |
| cosine_recall@1     | 0.6971     |
| cosine_recall@3     | 0.82       |
| cosine_recall@5     | 0.8686     |
| cosine_recall@10    | 0.9057     |
| cosine_ndcg@10      | 0.8036     |
| cosine_mrr@10       | 0.7707     |
| **cosine_map@100**  | **0.7749** |

#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6957     |
| cosine_accuracy@3   | 0.8229     |
| cosine_accuracy@5   | 0.8643     |
| cosine_accuracy@10  | 0.9043     |
| cosine_precision@1  | 0.6957     |
| cosine_precision@3  | 0.2743     |
| cosine_precision@5  | 0.1729     |
| cosine_precision@10 | 0.0904     |
| cosine_recall@1     | 0.6957     |
| cosine_recall@3     | 0.8229     |
| cosine_recall@5     | 0.8643     |
| cosine_recall@10    | 0.9043     |
| cosine_ndcg@10      | 0.8028     |
| cosine_mrr@10       | 0.7701     |
| **cosine_map@100**  | **0.7744** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6871     |
| cosine_accuracy@3   | 0.8186     |
| cosine_accuracy@5   | 0.8529     |
| cosine_accuracy@10  | 0.8986     |
| cosine_precision@1  | 0.6871     |
| cosine_precision@3  | 0.2729     |
| cosine_precision@5  | 0.1706     |
| cosine_precision@10 | 0.0899     |
| cosine_recall@1     | 0.6871     |
| cosine_recall@3     | 0.8186     |
| cosine_recall@5     | 0.8529     |
| cosine_recall@10    | 0.8986     |
| cosine_ndcg@10      | 0.7952     |
| cosine_mrr@10       | 0.762      |
| **cosine_map@100**  | **0.7664** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6686     |
| cosine_accuracy@3   | 0.8129     |
| cosine_accuracy@5   | 0.8429     |
| cosine_accuracy@10  | 0.8943     |
| cosine_precision@1  | 0.6686     |
| cosine_precision@3  | 0.271      |
| cosine_precision@5  | 0.1686     |
| cosine_precision@10 | 0.0894     |
| cosine_recall@1     | 0.6686     |
| cosine_recall@3     | 0.8129     |
| cosine_recall@5     | 0.8429     |
| cosine_recall@10    | 0.8943     |
| cosine_ndcg@10      | 0.7841     |
| cosine_mrr@10       | 0.7487     |
| **cosine_map@100**  | **0.7527** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6471     |
| cosine_accuracy@3   | 0.7829     |
| cosine_accuracy@5   | 0.8243     |
| cosine_accuracy@10  | 0.8686     |
| cosine_precision@1  | 0.6471     |
| cosine_precision@3  | 0.261      |
| cosine_precision@5  | 0.1649     |
| cosine_precision@10 | 0.0869     |
| cosine_recall@1     | 0.6471     |
| cosine_recall@3     | 0.7829     |
| cosine_recall@5     | 0.8243     |
| cosine_recall@10    | 0.8686     |
| cosine_ndcg@10      | 0.7602     |
| cosine_mrr@10       | 0.7253     |
| **cosine_map@100**  | **0.7303** |
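In this benchmark each query has exactly one relevant chunk, which is why accuracy@k and recall@k coincide in the tables above. As a hedged sketch of how such rank-based metrics are computed (the helper names are ours, not the evaluator's API):

```python
import numpy as np

def accuracy_at_k(ranks, k):
    """Fraction of queries whose relevant document appears in the top-k.
    `ranks` holds the 1-based rank of the relevant document per query."""
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= k))

def mrr_at_k(ranks, k=10):
    """Mean reciprocal rank, counting only hits within the top-k."""
    ranks = np.asarray(ranks, dtype=float)
    reciprocal = np.where(ranks <= k, 1.0 / ranks, 0.0)
    return float(reciprocal.mean())

# Toy ranks of the single relevant chunk for five queries.
ranks = [1, 3, 1, 12, 2]
print(accuracy_at_k(ranks, 1))  # 0.4
print(accuracy_at_k(ranks, 3))  # 0.8
print(mrr_at_k(ranks))          # (1 + 1/3 + 1 + 0 + 1/2) / 5 ≈ 0.567
```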

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### json

* Dataset: json
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  |         | positive                                                                           | anchor                                                                            |
  |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                            |
  | details | <ul><li>min: 8 tokens</li><li>mean: 44.91 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.43 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>Certain provisions of the final rule become effective on April 1, 2024, but the majority of the final rule’s operative provisions (including the revisions to the definition of “limited purpose bank”) become effective on January 1, 2026, with additional data collection and reporting requirements becoming effective on January 1, 2027.</code> | <code>What are the effective dates for the main provisions and additional data collection and reporting requirements of the final rule impacting AENB's compliance obligations?</code> |
  | <code>Our total revenue for 2023 was $134.90 billion, an increase of 16% compared to 2022.</code> | <code>What was the total revenue for the year 2023 and the percentage increase from 2022?</code> |
  | <code>As of December 31, 2023, our domestic Chief Medical Officer leads a team of 22 nephrologists in our physician leadership team as part of our domestic Office of the Chief Medical Officer.</code> | <code>How many physicians are part of the domestic Office of the Chief Medical Officer at DaVita as of December 31, 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
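Conceptually, MatryoshkaLoss wraps the inner loss and applies it to each truncated prefix of the embeddings, summing the weighted results; MultipleNegativesRankingLoss itself is an in-batch softmax cross-entropy where each anchor's own positive is the correct class. The following is a rough numpy sketch of the idea under those assumptions, not the library's implementation:

```python
import numpy as np

def mnrl_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """In-batch-negatives cross-entropy: the diagonal of the cosine-similarity
    matrix holds each anchor's true positive; all other columns act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                                   # (batch, batch)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

def matryoshka_loss(anchors, positives,
                    dims=(768, 512, 256, 128, 64), weights=(1, 1, 1, 1, 1)) -> float:
    """Apply the inner loss to each truncated prefix and sum the weighted results,
    mirroring matryoshka_dims / matryoshka_weights in the config above."""
    return sum(w * mnrl_loss(anchors[:, :d], positives[:, :d])
               for w, d in zip(weights, dims))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 768))
positives = anchors + 0.1 * rng.normal(size=(8, 768))  # positives near their anchors
print(matryoshka_loss(anchors, positives) >= 0.0)  # True
```

Training against every prefix simultaneously is what makes the truncated 512/256/128/64-dimension embeddings usable at inference time.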

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step   | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8122     | 10     | 1.6288        | -                      | -                      | -                      | -                     | -                      |
| 0.9746     | 12     | -             | 0.7384                 | 0.7485                 | 0.7508                 | 0.7013                | 0.7561                 |
| 1.6244     | 20     | 0.6896        | -                      | -                      | -                      | -                     | -                      |
| 1.9492     | 24     | -             | 0.7499                 | 0.7621                 | 0.7676                 | 0.7220                | 0.7704                 |
| 2.4365     | 30     | 0.4965        | -                      | -                      | -                      | -                     | -                      |
| 2.9239     | 36     | -             | 0.7529                 | 0.7669                 | 0.7739                 | 0.7302                | 0.7754                 |
| 3.2487     | 40     | 0.415         | -                      | -                      | -                      | -                     | -                      |
| **3.8985** | **48** | **-**         | **0.7527**             | **0.7664**             | **0.7744**             | **0.7303**            | **0.7749**             |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.34.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "BAAI/bge-base-en-v1.5",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.41.2",
+     "pytorch": "2.1.2+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56bf6d9fb31e8bbbf008ba6482419b108bbe179611e719076e317de67ca7777f
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
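modules.json chains three stages: the Transformer produces per-token embeddings, Pooling collapses them to one vector per text, and Normalize L2-normalizes the result. A minimal numpy sketch of the last two stages, assuming CLS-token pooling as configured in 1_Pooling/config.json (the fake Transformer output below is an assumption for illustration):

```python
import numpy as np

def pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling: take the first token's vector per text, then L2-normalize."""
    cls = token_embeddings[:, 0, :]  # (batch, hidden) from (batch, seq, hidden)
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# Stand-in for Transformer output: batch of 3 texts, 6 tokens, 768 hidden dims.
tokens = np.random.RandomState(0).randn(3, 6, 768)
sentence_emb = pool_and_normalize(tokens)
assert sentence_emb.shape == (3, 768)
assert np.allclose(np.linalg.norm(sentence_emb, axis=1), 1.0)
```

Because the final Normalize module leaves every embedding unit-length, cosine similarity reduces to a plain dot product, which is convenient for retrieval.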
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff