
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on a JSON dataset of financial question-passage pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
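
The three modules above correspond to a BERT encoder, CLS-token pooling, and L2 normalization. As a minimal sketch, the same pipeline can be reproduced by hand with the transformers library (illustrative only; the pooling and normalization settings are taken from the printout above):

import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative reproduction of Transformer -> CLS pooling -> Normalize.
tokenizer = AutoTokenizer.from_pretrained("sh4796/bge-base-financial-matryoshka")
bert = AutoModel.from_pretrained("sh4796/bge-base-financial-matryoshka")

inputs = tokenizer(
    ["What are the primary components of U.S. sales volumes for Ford?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    outputs = bert(**inputs)

cls = outputs.last_hidden_state[:, 0]  # pooling_mode_cls_token=True
embeddings = torch.nn.functional.normalize(cls, p=2, dim=1)  # Normalize() module
print(embeddings.shape)  # torch.Size([1, 768])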

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sh4796/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'Management assessed the effectiveness of the company’s internal control over financial reporting as of December 31, 2023. In making this assessment, we used the criteria set forth by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) in Internal Control—Integrated Framework (2013).',
    'What criteria did Caterpillar Inc. use to assess the effectiveness of its internal control over financial reporting as of December 31, 2023?',
    'What are the primary components of U.S. sales volumes for Ford?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
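
Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a modest quality drop (see Evaluation below). A minimal sketch using the truncate_dim argument available in recent Sentence Transformers releases, reusing the sentences list from the snippet above:

from sentence_transformers import SentenceTransformer

# encode() now returns embeddings truncated to the first 256 dimensions
model = SentenceTransformer("sh4796/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 256)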

Evaluation

Metrics

The model was evaluated at each Matryoshka truncation dimension; the dim_* labels below correspond to the columns of the Training Logs table, matched by their final cosine_map@100 values.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.69
cosine_accuracy@3 0.8386
cosine_accuracy@5 0.87
cosine_accuracy@10 0.92
cosine_precision@1 0.69
cosine_precision@3 0.2795
cosine_precision@5 0.174
cosine_precision@10 0.092
cosine_recall@1 0.69
cosine_recall@3 0.8386
cosine_recall@5 0.87
cosine_recall@10 0.92
cosine_ndcg@10 0.8078
cosine_mrr@10 0.7718
cosine_map@100 0.7745

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7014
cosine_accuracy@3 0.8343
cosine_accuracy@5 0.8671
cosine_accuracy@10 0.9171
cosine_precision@1 0.7014
cosine_precision@3 0.2781
cosine_precision@5 0.1734
cosine_precision@10 0.0917
cosine_recall@1 0.7014
cosine_recall@3 0.8343
cosine_recall@5 0.8671
cosine_recall@10 0.9171
cosine_ndcg@10 0.8099
cosine_mrr@10 0.7756
cosine_map@100 0.7785

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.6929
cosine_accuracy@3 0.8286
cosine_accuracy@5 0.8614
cosine_accuracy@10 0.91
cosine_precision@1 0.6929
cosine_precision@3 0.2762
cosine_precision@5 0.1723
cosine_precision@10 0.091
cosine_recall@1 0.6929
cosine_recall@3 0.8286
cosine_recall@5 0.8614
cosine_recall@10 0.91
cosine_ndcg@10 0.8023
cosine_mrr@10 0.7679
cosine_map@100 0.7712

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.6729
cosine_accuracy@3 0.8171
cosine_accuracy@5 0.85
cosine_accuracy@10 0.8829
cosine_precision@1 0.6729
cosine_precision@3 0.2724
cosine_precision@5 0.17
cosine_precision@10 0.0883
cosine_recall@1 0.6729
cosine_recall@3 0.8171
cosine_recall@5 0.85
cosine_recall@10 0.8829
cosine_ndcg@10 0.7823
cosine_mrr@10 0.7496
cosine_map@100 0.7543

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.64
cosine_accuracy@3 0.79
cosine_accuracy@5 0.83
cosine_accuracy@10 0.8743
cosine_precision@1 0.64
cosine_precision@3 0.2633
cosine_precision@5 0.166
cosine_precision@10 0.0874
cosine_recall@1 0.64
cosine_recall@3 0.79
cosine_recall@5 0.83
cosine_recall@10 0.8743
cosine_ndcg@10 0.7602
cosine_mrr@10 0.7234
cosine_map@100 0.7279
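
These tables follow the output format of Sentence Transformers' InformationRetrievalEvaluator. A hedged sketch of how one per-dimension evaluation could be reproduced (the queries, corpus, and relevant_docs mappings are placeholders; the evaluator class and its truncate_dim argument are the real API):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sh4796/bge-base-financial-matryoshka")

# Placeholder data: query id -> question, corpus id -> passage,
# query id -> set of relevant corpus ids.
queries = {"q1": "How much did the marketing expenses increase in 2023?"}
corpus = {"d1": "Marketing expenses increased $48.8 million to $759.2 million in 2023."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate on embeddings truncated to 256 dimensions
)
results = evaluator(model)  # dict with accuracy/precision/recall@k, NDCG@10, MRR@10, MAP@100
print(results)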

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 8 tokens, mean 44.33 tokens, max 289 tokens
    • anchor: string; min 9 tokens, mean 20.43 tokens, max 46 tokens
  • Samples:
    • positive: The Company defines fair value as the price received to transfer an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date. In accordance with ASC 820, Fair Value Measurements and Disclosures, the Company uses the fair value hierarchy which prioritizes the inputs used to measure fair value. The hierarchy gives the highest priority to unadjusted quoted prices in active markets for identical assets or liabilities (Level 1), observable inputs other than quoted prices (Level 2), and unobservable inputs (Level 3).
      anchor: What is the role of Level 1, Level 2, and Level 3 inputs in the fair value hierarchy according to ASC 820?
    • positive: In the event of conversion of the Notes, if shares are delivered to the Company under the Capped Call Transactions, they will offset the dilutive effect of the shares that the Company would issue under the Notes.
      anchor: What happens to the dilutive effect of shares issued under the Notes if shares are delivered to the Company under the Capped Call Transactions during the conversion?
    • positive: Marketing expenses increased $48.8 million to $759.2 million in the year ended December 31, 2023 compared to the year ended December 31, 2022.
      anchor: How much did the marketing expenses increase in the year ended December 31, 2023?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
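
A minimal sketch of how this configuration maps onto the Sentence Transformers API (the loss classes are the ones named above; everything else is illustrative):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# MultipleNegativesRankingLoss uses other in-batch positives as negatives;
# MatryoshkaLoss applies it at every truncation dimension with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)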
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_map@100 dim_512_cosine_map@100 dim_256_cosine_map@100 dim_128_cosine_map@100 dim_64_cosine_map@100
0.8122 10 1.5604 - - - - -
0.9746 12 - 0.7538 0.7540 0.7483 0.7284 0.6906
1.6244 20 0.6618 - - - - -
1.9492 24 - 0.7654 0.7632 0.7582 0.7424 0.7186
2.4365 30 0.4579 - - - - -
2.9239 36 - 0.7686 0.7646 0.7619 0.7459 0.7238
3.2487 40 0.3995 - - - - -
3.8985 48 - 0.7694 0.7633 0.7641 0.7449 0.7225
0.8122 10 0.3798 - - - - -
0.9746 12 - 0.7713 0.7685 0.7691 0.7489 0.7249
1.6244 20 0.2958 - - - - -
1.9492 24 - 0.7726 0.7699 0.7688 0.7517 0.7283
2.4365 30 0.2273 - - - - -
2.9239 36 - 0.7742 0.7761 0.7734 0.7532 0.7276
3.2487 40 0.2136 - - - - -
3.8985 48 - 0.7745 0.7785 0.7712 0.7543 0.7279
  • The final row (epoch 3.8985, step 48) denotes the saved checkpoint; its dim_* values match the Evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.0
  • Transformers: 4.41.2
  • PyTorch: 2.2.0a0+6a974be
  • Accelerate: 0.27.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}