---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: >-
HP reviews goodwill for impairment by initially performing a qualitative
assessment to see if the fair value of a reporting unit is likely less
than its carrying amount. If more likely, a quantitative assessment
follows.
sentences:
- >-
What percentage did the Communications segment account for of the 2023
total segment income?
- How does HP determine whether goodwill impairment exists?
- >-
What was the primary reason for the actuarial gain during the year ended
December 31, 2022?
- source_sentence: >-
The consolidated financial statements and accompanying notes are listed in
Part IV, Item 15(a)(1).
sentences:
- What does Item 8 in the Annual Report on Form 10-K detail?
- >-
In which part of the Annual Report on Form 10-K are the consolidated
financial statements and accompanying notes listed?
- What is the estimated redemption rate for Chipotle gift cards?
- source_sentence: >-
American Express maintains direct relationships with Card Members and
merchants, which provides it with direct access to information at both
ends of the transaction, distinguishing its integrated payments platform
from the bankcard networks.
sentences:
- >-
How does American Express's integrated payments platform differentiate
itself from bankcard networks?
- How are contingent consideration liabilities valued?
- >-
How does Chipotle calculate revenue recognition for redeemed Chipotle
Rewards?
- source_sentence: >-
Open Value agreements are a simple, cost-effective way to acquire the
latest Microsoft technology. These agreements are designed for small and
medium organizations that want to license cloud services and on-premises
software over a three-year period. Under Open Value agreements,
organizations can elect to purchase perpetual licenses or subscribe to
licenses and SA is included.
sentences:
- >-
How are unpaid losses and loss expenses calculated in the financial
statements of an insurance and reinsurance company?
- >-
What type of financial documents are included in Part IV, Item 15(a)(1)
of the Annual Report on Form 10-K?
- >-
What type of organizations is the Open Value agreements designed for and
what licenses does it include?
- source_sentence: >-
The company's financial report indicates that the pre-tax amounts of gains
(losses) from foreign currency forward exchange contracts designated as
cash flow hedges were gains of $82 million in 2021, gains of $103 million
in 2022, and losses of $2 million in 2023.
sentences:
- >-
What were the pre-tax amounts of (gains) losses from foreign currency
forward exchange contracts designated as cash flow hedges for the years
ended December 31 from 2021 to 2023?
- >-
What is the projected change in income before income taxes if the 2023
discount rate for the U.S. defined benefit pension and retiree health
benefit plans changes by a quarter percentage point?
- >-
What sources contribute to Ford Credit’s liquidity as of December 31,
2023, and what was their total value?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6914285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8171428571428572
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.87
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9128571428571428
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6914285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2723809523809524
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.174
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09128571428571428
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6914285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8171428571428572
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.87
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9128571428571428
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8015002951126636
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7659410430839002
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.76947397245476
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6642857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.81
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8557142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8971428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6642857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17114285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0897142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6642857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.81
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8557142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8971428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7834209531598721
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7467698412698411
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7514515853623652
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.62
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7671428571428571
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8171428571428572
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8742857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.62
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2557142857142857
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1634285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08742857142857142
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.62
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7671428571428571
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8171428571428572
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8742857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7453405840762105
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7042613378684806
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.70911408987056
name: Cosine Map@100
---

# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.sbert.net) model fine-tuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the json dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: nomic-ai/modernbert-embed-base
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Sorour/modernbert-financial-matryoshka")
# Run inference
sentences = [
    "The company's financial report indicates that the pre-tax amounts of gains (losses) from foreign currency forward exchange contracts designated as cash flow hedges were gains of $82 million in 2021, gains of $103 million in 2022, and losses of $2 million in 2023.",
    'What were the pre-tax amounts of (gains) losses from foreign currency forward exchange contracts designated as cash flow hedges for the years ended December 31 from 2021 to 2023?',
    'What is the projected change in income before income taxes if the 2023 discount rate for the U.S. defined benefit pension and retiree health benefit plans changes by a quarter percentage point?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
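Because the model was trained with `MatryoshkaLoss`, the leading dimensions of each embedding are meaningful on their own, so you can trade a little retrieval quality (see the `dim_256` and `dim_64` metrics below) for smaller, faster embeddings. A minimal sketch, assuming sentence-transformers >= 2.7, which added the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Load the model with embeddings truncated to a smaller Matryoshka dimension;
# 256 should be one of the dimensions the model was trained on (768, 256, 64).
model = SentenceTransformer("Sorour/modernbert-financial-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "How does HP determine whether goodwill impairment exists?",
])
print(embeddings.shape)
# (1, 256)
```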
## Evaluation

### Metrics

#### Information Retrieval

- Datasets: `dim_768`, `dim_256` and `dim_64`
- Evaluated with `InformationRetrievalEvaluator`
| Metric              | dim_768 | dim_256 | dim_64 |
|:--------------------|:--------|:--------|:-------|
| cosine_accuracy@1   | 0.6914  | 0.6643  | 0.62   |
| cosine_accuracy@3   | 0.8171  | 0.81    | 0.7671 |
| cosine_accuracy@5   | 0.87    | 0.8557  | 0.8171 |
| cosine_accuracy@10  | 0.9129  | 0.8971  | 0.8743 |
| cosine_precision@1  | 0.6914  | 0.6643  | 0.62   |
| cosine_precision@3  | 0.2724  | 0.27    | 0.2557 |
| cosine_precision@5  | 0.174   | 0.1711  | 0.1634 |
| cosine_precision@10 | 0.0913  | 0.0897  | 0.0874 |
| cosine_recall@1     | 0.6914  | 0.6643  | 0.62   |
| cosine_recall@3     | 0.8171  | 0.81    | 0.7671 |
| cosine_recall@5     | 0.87    | 0.8557  | 0.8171 |
| cosine_recall@10    | 0.9129  | 0.8971  | 0.8743 |
| cosine_ndcg@10      | 0.8015  | 0.7834  | 0.7453 |
| cosine_mrr@10       | 0.7659  | 0.7468  | 0.7043 |
| cosine_map@100      | 0.7695  | 0.7515  | 0.7091 |
## Training Details

### Training Dataset

#### json

- Dataset: json
- Size: 6,300 training samples
- Columns: `positive` and `anchor`
- Approximate statistics based on the first 1000 samples:

  |         | positive | anchor |
  |:--------|:---------|:-------|
  | type    | string   | string |
  | details | min: 9 tokens, mean: 47.08 tokens, max: 998 tokens | min: 9 tokens, mean: 20.19 tokens, max: 41 tokens |
- Samples:

  | positive | anchor |
  |:---------|:-------|
  | Item 8 includes Financial Statements and Supplementary Data. | What type of data is found in Item 8 of detailed financial documentation? |
  | HP records revenue from the sale of equipment under sales-type leases as revenue at the commencement of the lease. This method is applied unless certain conditions such as customer acceptance remain uncertain or significant obligations to the customer remain unfulfilled. | How does HP recognize revenue from the sale of equipment under sales-type leases? |
  | The company maintains insurance coverage for general liability, property, business interruption, terrorism, and other risks with respect to their business for all of their owned and leased hotels. | What types of risks are usually covered by the company's insurance policies? |
- Loss: `MatryoshkaLoss` with these parameters (see the sketch after this list):

  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 256, 64],
      "matryoshka_weights": [1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
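These parameters correspond to wrapping the in-batch-negatives ranking loss in `MatryoshkaLoss`, so the same objective is applied to the embeddings truncated to 768, 256, and 64 dimensions with equal weights. A minimal sketch of that construction, assuming the sentence-transformers 3.x API:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Inner loss: each (anchor, positive) pair is a positive example, and every
# other example in the batch serves as an in-batch negative.
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: applies the inner loss at each truncated embedding size,
# weighting the three terms equally, matching the parameters above.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 256, 64],
    matryoshka_weights=[1, 1, 1],
)
```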
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
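These settings map onto the trainer's argument object roughly as follows. A hedged sketch, assuming the sentence-transformers 3.x training API; `output_dir` and `save_strategy` are assumptions not listed above (`save_strategy` must match `eval_strategy` when `load_best_model_at_end` is enabled):

```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-financial-matryoshka",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    # Avoid duplicate texts within a batch, which would act as false
    # negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```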
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
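Putting the pieces together, the training loop would look roughly like the sketch below, reusing the `args`, `loss`, and `evaluator` objects from the earlier sketches. The data loading is a placeholder: the card only identifies the data as a json dataset of 6,300 `anchor`/`positive` pairs.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer

# Placeholder file name; the card does not specify the dataset's location.
train_dataset = load_dataset("json", data_files="train.json", split="train")

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,            # SentenceTransformerTrainingArguments from above
    train_dataset=train_dataset,
    loss=loss,            # MatryoshkaLoss from above
    evaluator=evaluator,  # InformationRetrievalEvaluator from above
)
trainer.train()
```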
### Training Logs

| Epoch     | Step   | Training Loss | dim_768_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:---------:|:------:|:-------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.8122    | 10     | 9.4544        | -                      | -                      | -                     |
| 1.0       | 13     | -             | 0.7799                 | 0.7650                 | 0.7097                |
| 0.8122    | 10     | 3.1908        | -                      | -                      | -                     |
| 1.0       | 13     | -             | 0.7952                 | 0.7769                 | 0.7259                |
| 1.5685    | 20     | 1.8807        | -                      | -                      | -                     |
| 2.0       | 26     | -             | 0.8001                 | 0.7833                 | 0.7409                |
| 2.3249    | 30     | 1.7141        | -                      | -                      | -                     |
| 3.0       | 39     | -             | 0.8023                 | 0.7819                 | 0.7460                |
| 3.0812    | 40     | 1.3672        | -                      | -                      | -                     |
| **3.731** | **48** | **-**         | **0.8015**             | **0.7834**             | **0.7453**            |

- The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```