---
language:
  - en
license: apache-2.0
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:6300
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
datasets: []
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
widget:
  - source_sentence: >-
      In 2023, total government-based programs, including Medicare, Medicaid,
      and other government-based programs, contributed 67% to the U.S. dialysis
      patient service revenues.
    sentences:
      - >-
        How does Iron Mountain's reported EPS fully diluted from net income in
        2023 compare to 2022?
      - >-
        What was the total percentage of U.S. dialysis patient service revenues
        coming from government-based programs in 2023?
      - What year did the company introduce multiplex theatres?
  - source_sentence: >-
      The gross realized losses on sales of AFS debt associated for 2023
      amounted to $514 million, indicating a negative financial outcome from
      these transactions during the year.
    sentences:
      - >-
        What were the gross realized losses on sales of AFS debt securities in
        2023?
      - >-
        How is information about legal proceedings described in the Annual
        Report on Form 10-K?
      - >-
        What sections are included alongside the Financial Statements in this
        report?
  - source_sentence: >-
      Other income, net, changed favorably by $215 million in the year ended
      December 31, 2023 as compared to the year ended December 31, 2022. The
      favorable change was primarily due to fluctuations in foreign currency
      exchange rates on our intercompany balances.
    sentences:
      - >-
        What was the monetary change in other income (expense), net, from 2022
        to 2023?
      - >-
        What strategic actions has Walmart International taken over the last
        three years?
      - What is described under Item 8 in the context of a financial document?
  - source_sentence: >-
      Segments The Company manages its business primarily on a geographic basis.
      The Company’s reportable segments consist of the Americas, Europe, Greater
      China, Japan and Rest of Asia Pacific.
    sentences:
      - >-
        What is the total debt repayment obligation mentioned in the financial
        outline?
      - What segments does the Company manage its business on?
      - >-
        What is the title of Item 8 which contains page information in a
        financial document?
  - source_sentence: >-
      Item 8 typically refers to Financial Statements and Supplementary Data in
      a document.
    sentences:
      - What is the primary function of Etsy's online marketplaces?
      - >-
        What are the maximum leverage ratios specified under the Senior Credit
        Facilities for the periods ending fourth quarter of 2023 and first
        quarter of 2024?
      - What does Item 8 in a document usually represent?
pipeline_tag: sentence-similarity
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.7057142857142857
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8371428571428572
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8742857142857143
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9128571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7057142857142857
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.27904761904761904
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17485714285714282
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09128571428571428
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7057142857142857
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8371428571428572
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8742857142857143
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9128571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8114149232737874
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7786632653061224
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7821804400415905
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.7057142857142857
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8328571428571429
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8714285714285714
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9128571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7057142857142857
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2776190476190476
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17428571428571427
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09128571428571428
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7057142857142857
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8328571428571429
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8714285714285714
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9128571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8108495475926208
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7780068027210884
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7816465534941897
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.7157142857142857
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8342857142857143
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.87
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9057142857142857
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7157142857142857
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.27809523809523806
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.174
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09057142857142855
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7157142857142857
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8342857142857143
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.87
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9057142857142857
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8123157823677117
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7823004535147391
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7862892219643212
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.6928571428571428
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8171428571428572
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8614285714285714
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9028571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6928571428571428
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2723809523809524
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17228571428571426
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09028571428571427
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6928571428571428
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8171428571428572
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8614285714285714
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9028571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7975011441256048
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7638248299319729
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7673061455577762
            name: Cosine Map@100
---

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
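
To make the stack above concrete, here is a minimal sketch of what these three modules compute, using the plain transformers API instead of sentence-transformers. It assumes the Hub repository exposes the underlying BERT weights and tokenizer (as Sentence Transformers checkpoints normally do); the SentenceTransformer usage shown in the next section remains the recommended path.

import torch
from transformers import AutoTokenizer, AutoModel

model_id = "pavanmantha/bge-base-en-honsec10k-embed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)
bert.eval()

sentences = ["What does Item 8 in a document usually represent?"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = bert(**batch)

# (1) Pooling: keep only the [CLS] token embedding (pooling_mode_cls_token=True)
cls_embedding = outputs.last_hidden_state[:, 0]
# (2) Normalize: L2-normalize so that dot products equal cosine similarities
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 768])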

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("pavanmantha/bge-base-en-honsec10k-embed")
# Run inference
sentences = [
    'Item 8 typically refers to Financial Statements and Supplementary Data in a document.',
    'What does Item 8 in a document usually represent?',
    'What are the maximum leverage ratios specified under the Senior Credit Facilities for the periods ending fourth quarter of 2023 and first quarter of 2024?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
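
Because the model was trained with MatryoshkaLoss at 768, 512, 256, and 128 dimensions (see Training Details below), the embeddings can also be truncated to a smaller size with only a modest quality drop, as the Evaluation section indicates. A short sketch using the truncate_dim argument available in recent sentence-transformers releases:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model_256 = SentenceTransformer("pavanmantha/bge-base-en-honsec10k-embed", truncate_dim=256)

embeddings = model_256.encode([
    "Item 8 typically refers to Financial Statements and Supplementary Data in a document.",
    "What does Item 8 in a document usually represent?",
])
print(embeddings.shape)
# (2, 256)

# Cosine similarity is still meaningful on the truncated embeddings
print(model_256.similarity(embeddings, embeddings))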

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7057
cosine_accuracy@3 0.8371
cosine_accuracy@5 0.8743
cosine_accuracy@10 0.9129
cosine_precision@1 0.7057
cosine_precision@3 0.279
cosine_precision@5 0.1749
cosine_precision@10 0.0913
cosine_recall@1 0.7057
cosine_recall@3 0.8371
cosine_recall@5 0.8743
cosine_recall@10 0.9129
cosine_ndcg@10 0.8114
cosine_mrr@10 0.7787
cosine_map@100 0.7822

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7057
cosine_accuracy@3 0.8329
cosine_accuracy@5 0.8714
cosine_accuracy@10 0.9129
cosine_precision@1 0.7057
cosine_precision@3 0.2776
cosine_precision@5 0.1743
cosine_precision@10 0.0913
cosine_recall@1 0.7057
cosine_recall@3 0.8329
cosine_recall@5 0.8714
cosine_recall@10 0.9129
cosine_ndcg@10 0.8108
cosine_mrr@10 0.778
cosine_map@100 0.7816

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7157
cosine_accuracy@3 0.8343
cosine_accuracy@5 0.87
cosine_accuracy@10 0.9057
cosine_precision@1 0.7157
cosine_precision@3 0.2781
cosine_precision@5 0.174
cosine_precision@10 0.0906
cosine_recall@1 0.7157
cosine_recall@3 0.8343
cosine_recall@5 0.87
cosine_recall@10 0.9057
cosine_ndcg@10 0.8123
cosine_mrr@10 0.7823
cosine_map@100 0.7863

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.6929
cosine_accuracy@3 0.8171
cosine_accuracy@5 0.8614
cosine_accuracy@10 0.9029
cosine_precision@1 0.6929
cosine_precision@3 0.2724
cosine_precision@5 0.1723
cosine_precision@10 0.0903
cosine_recall@1 0.6929
cosine_recall@3 0.8171
cosine_recall@5 0.8614
cosine_recall@10 0.9029
cosine_ndcg@10 0.7975
cosine_mrr@10 0.7638
cosine_map@100 0.7673
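
A minimal sketch of how retrieval metrics like these can be computed with sentence-transformers' InformationRetrievalEvaluator. The query/corpus/relevance mappings below are placeholders; the exact evaluation split used for the tables above is not included in this card.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("pavanmantha/bge-base-en-honsec10k-embed")

# Placeholder evaluation data: ids mapped to text, plus the relevance judgments
queries = {"q1": "What does Item 8 in a document usually represent?"}
corpus = {"d1": "Item 8 typically refers to Financial Statements and Supplementary Data in a document."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_recall@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...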

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 6 tokens, mean: 44.43 tokens, max: 248 tokens
    • anchor: string; min: 7 tokens, mean: 20.52 tokens, max: 45 tokens
  • Samples:
    • positive: Net deferred tax liabilities $
    • positive: ITEM 3. LEGAL PROCEEDINGS Please see the legal proceedings described in Note 21. Commitments and Contingencies included in Item 8 of Part II of this report.
      anchor: In what part and item of the report is Note 21 located?
    • positive: During fiscal year 2023, we repurchased 10.4 million shares for approximately $1,295 million.
      anchor: What total amount was spent on share repurchases during fiscal year 2023?
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows after this block):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
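
A minimal sketch of how this loss configuration can be assembled with the sentence-transformers loss API (dataset and trainer wiring omitted):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Inner loss: in-batch negatives over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: apply the inner loss on embeddings truncated to each Matryoshka dimension
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128],
    matryoshka_weights=[1, 1, 1, 1],
)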
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
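
A hedged sketch of how these non-default values map onto the sentence-transformers 3.x training arguments (the output directory is a placeholder, and dataset/loss/evaluator wiring is omitted):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,   # mixed precision assumes a CUDA GPU
    tf32=False,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy when load_best_model_at_end=True
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

# A SentenceTransformerTrainer would then tie model, args, train_dataset, loss, and evaluator together:
# trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss, evaluator=evaluator)
# trainer.train()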

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_768_cosine_map@100 |
|--------|------|---------------|------------------------|------------------------|------------------------|------------------------|
| 0.8122 | 10   | 1.1537        | -                      | -                      | -                      | -                      |
| 0.9746 | 12   | -             | 0.7517                 | 0.7620                 | 0.7633                 | 0.7636                 |
| 1.6244 | 20   | 0.4387        | -                      | -                      | -                      | -                      |
| 1.9492 | 24   | -             | 0.7616                 | 0.7802                 | 0.7796                 | 0.7769                 |
| 2.4365 | 30   | 0.3113        | -                      | -                      | -                      | -                      |
| 2.9239 | 36   | -             | 0.7668                 | 0.7837                 | 0.7809                 | 0.7821                 |
| 3.2487 | 40   | 0.2554        | -                      | -                      | -                      | -                      |
| 3.8985 | 48   | -             | 0.7673                 | 0.7863                 | 0.7816                 | 0.7822                 |
  • The final row (epoch 3.8985, step 48) denotes the saved checkpoint; its cosine_map@100 values match those reported in the Evaluation section above.

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}