
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

Model Sources

  • Documentation: Sentence Transformers Documentation
  • Repository: Sentence Transformers on GitHub
  • Hugging Face: Sentence Transformers on Hugging Face

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
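
Because the trailing Normalize() module L2-normalizes every embedding, cosine similarity and plain dot product produce the same scores. Below is a minimal sketch confirming this; the two example sentences are illustrative only.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/ST-tramits-SB-003-5ep")

# encode() returns a NumPy array by default; Normalize() makes each row unit-length
emb = model.encode([
    "Empadronament d'un/a menor en un domicili diferent al dels progenitors",
    "Quin és el límit de temps màxim per al període de funcionament en proves?",
])

print(np.linalg.norm(emb, axis=1))  # ~[1.0, 1.0]
print(emb @ emb.T)                  # same values as the cosine-similarity matrix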

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/ST-tramits-SB-003-5ep")
# Run inference
sentences = [
    "Empadronament d'un/a menor en un domicili diferent al domicili dels progenitors - Amb autorització de les persones progenitores",
    "Quin és el resultat de l'empadronament d'un/a menor en un domicili diferent al dels progenitors amb autorització?",
    'Quin és el límit de temps màxim per al període de funcionament en proves?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
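
For semantic search, the same similarity call can rank candidate passages against a query. A small sketch follows; the query and candidate texts are made-up examples.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/ST-tramits-SB-003-5ep")

# Hypothetical query and candidate passages, for illustration only
query = "Com puc empadronar un menor en un domicili diferent al dels progenitors?"
docs = [
    "Empadronament d'un/a menor en un domicili diferent al domicili dels progenitors",
    "Quin és el límit de temps màxim per al període de funcionament en proves?",
]

# Cosine similarity between the query and each candidate (shape [1, 2])
scores = model.similarity(model.encode(query), model.encode(docs))
best = int(scores.argmax())
print(docs[best], float(scores[0, best]))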

Evaluation

Metrics

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.3883
cosine_accuracy@3 0.6311
cosine_accuracy@5 0.7198
cosine_accuracy@10 0.8183
cosine_precision@1 0.3883
cosine_precision@3 0.2104
cosine_precision@5 0.144
cosine_precision@10 0.0818
cosine_recall@1 0.3883
cosine_recall@3 0.6311
cosine_recall@5 0.7198
cosine_recall@10 0.8183
cosine_ndcg@10 0.5968
cosine_mrr@10 0.5265
cosine_map@100 0.5338

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.3745
cosine_accuracy@3 0.6227
cosine_accuracy@5 0.724
cosine_accuracy@10 0.8211
cosine_precision@1 0.3745
cosine_precision@3 0.2076
cosine_precision@5 0.1448
cosine_precision@10 0.0821
cosine_recall@1 0.3745
cosine_recall@3 0.6227
cosine_recall@5 0.724
cosine_recall@10 0.8211
cosine_ndcg@10 0.5928
cosine_mrr@10 0.5201
cosine_map@100 0.5274

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.3731
cosine_accuracy@3 0.6214
cosine_accuracy@5 0.7184
cosine_accuracy@10 0.8266
cosine_precision@1 0.3731
cosine_precision@3 0.2071
cosine_precision@5 0.1437
cosine_precision@10 0.0827
cosine_recall@1 0.3731
cosine_recall@3 0.6214
cosine_recall@5 0.7184
cosine_recall@10 0.8266
cosine_ndcg@10 0.5934
cosine_mrr@10 0.5193
cosine_map@100 0.5262

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.3953
cosine_accuracy@3 0.6186
cosine_accuracy@5 0.6963
cosine_accuracy@10 0.8252
cosine_precision@1 0.3953
cosine_precision@3 0.2062
cosine_precision@5 0.1393
cosine_precision@10 0.0825
cosine_recall@1 0.3953
cosine_recall@3 0.6186
cosine_recall@5 0.6963
cosine_recall@10 0.8252
cosine_ndcg@10 0.5983
cosine_mrr@10 0.527
cosine_map@100 0.5339

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.3828
cosine_accuracy@3 0.6033
cosine_accuracy@5 0.706
cosine_accuracy@10 0.8155
cosine_precision@1 0.3828
cosine_precision@3 0.2011
cosine_precision@5 0.1412
cosine_precision@10 0.0816
cosine_recall@1 0.3828
cosine_recall@3 0.6033
cosine_recall@5 0.706
cosine_recall@10 0.8155
cosine_ndcg@10 0.5896
cosine_mrr@10 0.5182
cosine_map@100 0.5259

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.3703
cosine_accuracy@3 0.5687
cosine_accuracy@5 0.6852
cosine_accuracy@10 0.7892
cosine_precision@1 0.3703
cosine_precision@3 0.1896
cosine_precision@5 0.137
cosine_precision@10 0.0789
cosine_recall@1 0.3703
cosine_recall@3 0.5687
cosine_recall@5 0.6852
cosine_recall@10 0.7892
cosine_ndcg@10 0.5679
cosine_mrr@10 0.4985
cosine_map@100 0.5068
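
The six tables above report the same retrieval evaluation at each Matryoshka truncation dimension (1024, 768, 512, 256, 128 and 64). To encode at a reduced dimension, the truncate_dim argument of SentenceTransformer can be used; a minimal sketch (the example sentence is illustrative):

from sentence_transformers import SentenceTransformer

# Any of the trained Matryoshka dimensions (1024, 768, 512, 256, 128, 64) can be chosen
model_256 = SentenceTransformer("adriansanz/ST-tramits-SB-003-5ep", truncate_dim=256)

emb = model_256.encode(["Quin és el format del programa Aula Mentor?"])
print(emb.shape)  # (1, 256)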

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 2,884 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min: 3 tokens, mean: 36.18 tokens, max: 194 tokens
    • anchor: string, min: 9 tokens, mean: 19.77 tokens, max: 60 tokens
  • Samples:
    • positive: I assessorem per l'optimització dels contractes de subministraments energètics.
      anchor: Quin és el resultat esperat del servei de millora dels contractes de serveis de llum i gas?
    • positive: Retorna en format JSON adequat
      anchor: Quin és el format de sortida del qüestionari de projectes específics?
    • positive: Aula Mentor és un programa d'ajuda a l'alumne que té com a objectiu principal donar suport als estudiants en la seva formació i desenvolupament personal i professional.
      anchor: Quin és el format del programa Aula Mentor?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
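
The MatryoshkaLoss configuration above can be reconstructed with the Sentence Transformers loss classes. A minimal sketch, using the base model as a stand-in for the actual training setup:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")

# Apply the ranking loss at every truncation dimension, with equal weights
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
)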
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
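
For reference, a minimal sketch of how the non-default hyperparameters above map onto SentenceTransformerTrainingArguments; the output directory is a placeholder, and save_strategy="epoch" is an assumption so that load_best_model_at_end can compare per-epoch checkpoints.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/ST-tramits-SB-003-5ep",  # placeholder path
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so the best checkpoint can be kept per epoch
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)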

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  | Step | Training Loss | dim_1024 | dim_768 | dim_512 | dim_256 | dim_128 | dim_64
0.8840 | 10   | 2.6418        | -        | -       | -       | -       | -       | -
0.9724 | 11   | -             | 0.4986   | 0.5108  | 0.5014  | 0.4934  | 0.4779  | 0.4351
1.7680 | 20   | 1.1708        | -        | -       | -       | -       | -       | -
1.9448 | 22   | -             | 0.5197   | 0.5248  | 0.5195  | 0.5290  | 0.5052  | 0.4904
2.6519 | 30   | 0.5531        | -        | -       | -       | -       | -       | -
2.9171 | 33   | -             | 0.5304   | 0.5274  | 0.5196  | 0.5279  | 0.5234  | 0.4947
3.5359 | 40   | 0.2859        | -        | -       | -       | -       | -       | -
3.9779 | 45   | -             | 0.5256   | 0.5292  | 0.5206  | 0.5313  | 0.5174  | 0.5046
4.4199 | 50   | 0.2144        | -        | -       | -       | -       | -       | -
4.8619 | 55   | -             | 0.5338   | 0.5274  | 0.5262  | 0.5339  | 0.5259  | 0.5068
  • The dim_* columns report cosine_map@100 at each Matryoshka truncation dimension.
  • The final row (epoch 4.8619, step 55) corresponds to the saved checkpoint; its values match the metrics reported above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 1.1.0.dev0
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}