
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
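
The three modules run in sequence: the BERT encoder produces token embeddings, pooling keeps only the [CLS] token, and the result is L2-normalized so that dot product equals cosine similarity. As an illustration only (not the recommended loading path), here is a minimal sketch of the same pipeline using the plain transformers API:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "elsayovita/bge-base-financial-matryoshka-testing"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

batch = tokenizer(
    ["What was the total cash dividend paid by Electronic Arts?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (0) Transformer
cls = hidden[:, 0]                             # (1) Pooling: CLS token only
embedding = F.normalize(cls, p=2, dim=1)       # (2) Normalize to unit length
print(embedding.shape)  # torch.Size([1, 768])
```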

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("elsayovita/bge-base-financial-matryoshka-testing")
# Run inference
sentences = [
    'Electronic Arts paid cash dividends totaling $210 million during the fiscal year ended March 31, 2023.',
    'What was the total cash dividend paid by Electronic Arts in the fiscal year ended March 31, 2023?',
    "What was the SRO's accrued amount as a receivable for CAT implementation expenses as of December 31, 2023?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
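
Because the model is trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to 512, 256, 128, or 64 dimensions, trading a little retrieval quality (quantified in the Evaluation section below) for smaller indexes. A minimal sketch using the truncate_dim argument:

```python
from sentence_transformers import SentenceTransformer

# 256 is one of the dimensions the model was trained for
# (768, 512, 256, 128, 64); any of them is a sensible choice here.
model = SentenceTransformer(
    "elsayovita/bge-base-financial-matryoshka-testing",
    truncate_dim=256,
)
embeddings = model.encode([
    "What was the total cash dividend paid by Electronic Arts "
    "in the fiscal year ended March 31, 2023?",
])
print(embeddings.shape)
# (1, 256)
```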

Evaluation

Metrics

Information Retrieval (768 dimensions)

Each of the five tables below reports the same retrieval evaluation at one of the Matryoshka dimensions used in training (768, 512, 256, 128, 64), matching the dim_*_cosine_map@100 columns in the Training Logs.

| Metric              | Value  |
|---------------------|--------|
| cosine_accuracy@1   | 0.6843 |
| cosine_accuracy@3   | 0.8129 |
| cosine_accuracy@5   | 0.86   |
| cosine_accuracy@10  | 0.8986 |
| cosine_precision@1  | 0.6843 |
| cosine_precision@3  | 0.271  |
| cosine_precision@5  | 0.172  |
| cosine_precision@10 | 0.0899 |
| cosine_recall@1     | 0.6843 |
| cosine_recall@3     | 0.8129 |
| cosine_recall@5     | 0.86   |
| cosine_recall@10    | 0.8986 |
| cosine_ndcg@10      | 0.7929 |
| cosine_mrr@10       | 0.7589 |
| cosine_map@100      | 0.763  |

Information Retrieval (512 dimensions)

| Metric              | Value  |
|---------------------|--------|
| cosine_accuracy@1   | 0.6857 |
| cosine_accuracy@3   | 0.82   |
| cosine_accuracy@5   | 0.8586 |
| cosine_accuracy@10  | 0.9057 |
| cosine_precision@1  | 0.6857 |
| cosine_precision@3  | 0.2733 |
| cosine_precision@5  | 0.1717 |
| cosine_precision@10 | 0.0906 |
| cosine_recall@1     | 0.6857 |
| cosine_recall@3     | 0.82   |
| cosine_recall@5     | 0.8586 |
| cosine_recall@10    | 0.9057 |
| cosine_ndcg@10      | 0.7964 |
| cosine_mrr@10       | 0.7614 |
| cosine_map@100      | 0.7649 |

Information Retrieval (256 dimensions)

| Metric              | Value  |
|---------------------|--------|
| cosine_accuracy@1   | 0.6771 |
| cosine_accuracy@3   | 0.8043 |
| cosine_accuracy@5   | 0.8571 |
| cosine_accuracy@10  | 0.89   |
| cosine_precision@1  | 0.6771 |
| cosine_precision@3  | 0.2681 |
| cosine_precision@5  | 0.1714 |
| cosine_precision@10 | 0.089  |
| cosine_recall@1     | 0.6771 |
| cosine_recall@3     | 0.8043 |
| cosine_recall@5     | 0.8571 |
| cosine_recall@10    | 0.89   |
| cosine_ndcg@10      | 0.7846 |
| cosine_mrr@10       | 0.7506 |
| cosine_map@100      | 0.755  |

Information Retrieval (128 dimensions)

| Metric              | Value  |
|---------------------|--------|
| cosine_accuracy@1   | 0.6614 |
| cosine_accuracy@3   | 0.7957 |
| cosine_accuracy@5   | 0.8271 |
| cosine_accuracy@10  | 0.88   |
| cosine_precision@1  | 0.6614 |
| cosine_precision@3  | 0.2652 |
| cosine_precision@5  | 0.1654 |
| cosine_precision@10 | 0.088  |
| cosine_recall@1     | 0.6614 |
| cosine_recall@3     | 0.7957 |
| cosine_recall@5     | 0.8271 |
| cosine_recall@10    | 0.88   |
| cosine_ndcg@10      | 0.7729 |
| cosine_mrr@10       | 0.7385 |
| cosine_map@100      | 0.743  |

Information Retrieval (64 dimensions)

| Metric              | Value  |
|---------------------|--------|
| cosine_accuracy@1   | 0.6129 |
| cosine_accuracy@3   | 0.7629 |
| cosine_accuracy@5   | 0.7957 |
| cosine_accuracy@10  | 0.8471 |
| cosine_precision@1  | 0.6129 |
| cosine_precision@3  | 0.2543 |
| cosine_precision@5  | 0.1591 |
| cosine_precision@10 | 0.0847 |
| cosine_recall@1     | 0.6129 |
| cosine_recall@3     | 0.7629 |
| cosine_recall@5     | 0.7957 |
| cosine_recall@10    | 0.8471 |
| cosine_ndcg@10      | 0.7316 |
| cosine_mrr@10       | 0.6946 |
| cosine_map@100      | 0.7002 |
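
To reproduce numbers like these on your own query–passage pairs, sentence-transformers provides an InformationRetrievalEvaluator that accepts a truncate_dim. A minimal sketch, assuming a small hypothetical corpus (the ids and texts below are illustrative, not the actual evaluation set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("elsayovita/bge-base-financial-matryoshka-testing")

# Hypothetical toy data: id -> text for queries and corpus,
# plus the set of relevant corpus ids per query.
queries = {"q1": "What was the total cash dividend paid by Electronic Arts in FY2023?"}
corpus = {
    "d1": "Electronic Arts paid cash dividends totaling $210 million during "
          "the fiscal year ended March 31, 2023.",
    "d2": "Value-Based Care refers to the goal of incentivizing healthcare "
          "providers to increase quality while lowering the cost of care.",
}
relevant_docs = {"q1": {"d1"}}

# Evaluate at each Matryoshka dimension, mirroring the tables above.
for dim in (768, 512, 256, 128, 64):
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        truncate_dim=dim,
        name=f"toy_eval_dim_{dim}",
    )
    print(dim, evaluator(model))
```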

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:

    |         | positive                                           | anchor                                           |
    |---------|----------------------------------------------------|--------------------------------------------------|
    | type    | string                                             | string                                           |
    | details | min: 6 tokens, mean: 46.86 tokens, max: 252 tokens | min: 7 tokens, mean: 20.5 tokens, max: 51 tokens |
  • Samples:

    | positive | anchor |
    |----------|--------|
    | For the year ended December 31, 2023, the average balance for savings and transaction accounts was $86,102 and the interest expense for these accounts was $3,357. | What was the average balance and interest expense for savings and transaction accounts in the year 2023? |
    | Limits are used at various levels and types to manage the size of liquidity exposures, relative to acceptable risk levels according to the organization's liquidity risk tolerance. | What is the purpose of the liquidity risk limits used by the organization? |
    | Value-Based Care refers to the goal of incentivizing healthcare providers to simultaneously increase quality while lowering the cost of care for patients. | What is the primary goal of value-based care according to the company? |
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
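
In code, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, so the first 768/512/256/128/64 dimensions of every embedding are all trained to rank well. A minimal sketch, assuming a toy in-memory stand-in for the 6,300-sample positive/anchor dataset:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Toy stand-in for the training set (columns: positive, anchor).
train_dataset = Dataset.from_dict({
    "positive": ["Electronic Arts paid cash dividends totaling $210 million "
                 "during the fiscal year ended March 31, 2023."],
    "anchor": ["What was the total cash dividend paid by Electronic Arts in "
               "the fiscal year ended March 31, 2023?"],
})

# In-batch-negatives ranking loss, applied at each Matryoshka dimension.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
)
```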
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
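
These settings map one-to-one onto SentenceTransformerTrainingArguments. A sketch under stated assumptions: output_dir is a placeholder, and save_strategy="epoch" is added because load_best_model_at_end requires the save and eval strategies to match:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```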

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch      | Step   | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|------------|--------|---------------|------------------------|------------------------|------------------------|-----------------------|------------------------|
| 0.8122     | 10     | 1.4746        | -                      | -                      | -                      | -                     | -                      |
| 0.9746     | 12     | -             | 0.7378                 | 0.7470                 | 0.7589                 | 0.6941                | 0.7563                 |
| 1.6244     | 20     | 0.6694        | -                      | -                      | -                      | -                     | -                      |
| **1.9492** | **24** | -             | **0.743**              | **0.755**              | **0.7649**             | **0.7002**            | **0.763**              |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}