
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
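
The loaded model exposes this pipeline (XLM-RoBERTa encoder, CLS-token pooling, L2 normalization) directly; a quick sanity check, using the repository id from the usage example below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/ST-tramits-VL-001-5ep")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 1024
print(model)                                     # prints the three modules above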

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/ST-tramits-VL-001-5ep")
# Run inference
sentences = [
    "S'ha de comunicar la realització de focs d’esbarjo i qualsevol mena de crema de vegetació agrària en microexplotacions o petites explotacions agràries...",
    "Quin és el tipus de explotacions agràries que estan subjectes a la comunicació de focs d'esbarjo o cremes de vegetació agrària en microexplotacions?",
    'Quin és el paper de les bases de la convocatòria en la sol·licitud de subvenció?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
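
Because the Normalize() module makes every embedding unit-length, cosine similarity reduces to a dot product, so ranking passages against a query is cheap. A minimal semantic-search sketch; the passages are illustrative, borrowed from the training samples listed further down:

# Rank candidate passages for a query (reuses `model` from the snippet above)
query_emb = model.encode(["Quin és el paper del certificat tècnic en la Declaració responsable d'obertura?"])
doc_embs = model.encode([
    "El certificat tècnic és un requisit per a l'exercici d'una activitat econòmica innòcua.",
    "El document necessari per realitzar l'autoliquidació de taxa és la llicència de primera ocupació de l'immoble.",
])
scores = model.similarity(query_emb, doc_embs)
print(scores)  # shape [1, 2]; the higher score marks the more relevant passage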

Evaluation

Metrics

The six Information Retrieval tables below report the same metrics at each Matryoshka dimensionality, from 1024 down to 64; each table is labeled with its evaluator name, which matches the corresponding dim_* column in the training logs.

Information Retrieval

  • Dataset: dim_1024

Metric Value
cosine_accuracy@1 0.1241
cosine_accuracy@3 0.2263
cosine_accuracy@5 0.3358
cosine_accuracy@10 0.5328
cosine_precision@1 0.1241
cosine_precision@3 0.0754
cosine_precision@5 0.0672
cosine_precision@10 0.0533
cosine_recall@1 0.1241
cosine_recall@3 0.2263
cosine_recall@5 0.3358
cosine_recall@10 0.5328
cosine_ndcg@10 0.29
cosine_mrr@10 0.2175
cosine_map@100 0.2404

Information Retrieval

  • Dataset: dim_768

Metric Value
cosine_accuracy@1 0.1387
cosine_accuracy@3 0.2628
cosine_accuracy@5 0.3358
cosine_accuracy@10 0.5693
cosine_precision@1 0.1387
cosine_precision@3 0.0876
cosine_precision@5 0.0672
cosine_precision@10 0.0569
cosine_recall@1 0.1387
cosine_recall@3 0.2628
cosine_recall@5 0.3358
cosine_recall@10 0.5693
cosine_ndcg@10 0.3136
cosine_mrr@10 0.2375
cosine_map@100 0.2568

Information Retrieval

  • Dataset: dim_512

Metric Value
cosine_accuracy@1 0.1387
cosine_accuracy@3 0.2701
cosine_accuracy@5 0.3796
cosine_accuracy@10 0.5693
cosine_precision@1 0.1387
cosine_precision@3 0.09
cosine_precision@5 0.0759
cosine_precision@10 0.0569
cosine_recall@1 0.1387
cosine_recall@3 0.2701
cosine_recall@5 0.3796
cosine_recall@10 0.5693
cosine_ndcg@10 0.317
cosine_mrr@10 0.2406
cosine_map@100 0.2616

Information Retrieval

  • Dataset: dim_256

Metric Value
cosine_accuracy@1 0.1241
cosine_accuracy@3 0.2774
cosine_accuracy@5 0.3212
cosine_accuracy@10 0.5182
cosine_precision@1 0.1241
cosine_precision@3 0.0925
cosine_precision@5 0.0642
cosine_precision@10 0.0518
cosine_recall@1 0.1241
cosine_recall@3 0.2774
cosine_recall@5 0.3212
cosine_recall@10 0.5182
cosine_ndcg@10 0.2904
cosine_mrr@10 0.2218
cosine_map@100 0.244

Information Retrieval

  • Dataset: dim_128

Metric Value
cosine_accuracy@1 0.1095
cosine_accuracy@3 0.2555
cosine_accuracy@5 0.4015
cosine_accuracy@10 0.5401
cosine_precision@1 0.1095
cosine_precision@3 0.0852
cosine_precision@5 0.0803
cosine_precision@10 0.054
cosine_recall@1 0.1095
cosine_recall@3 0.2555
cosine_recall@5 0.4015
cosine_recall@10 0.5401
cosine_ndcg@10 0.2983
cosine_mrr@10 0.2238
cosine_map@100 0.2454

Information Retrieval

  • Dataset: dim_64

Metric Value
cosine_accuracy@1 0.1095
cosine_accuracy@3 0.2044
cosine_accuracy@5 0.3285
cosine_accuracy@10 0.5547
cosine_precision@1 0.1095
cosine_precision@3 0.0681
cosine_precision@5 0.0657
cosine_precision@10 0.0555
cosine_recall@1 0.1095
cosine_recall@3 0.2044
cosine_recall@5 0.3285
cosine_recall@10 0.5547
cosine_ndcg@10 0.2897
cosine_mrr@10 0.2102
cosine_map@100 0.2299
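
These numbers are the output of sentence-transformers' InformationRetrievalEvaluator. A minimal sketch of how to rerun such an evaluation; the queries, corpus, and relevance judgments here are placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# truncate_dim selects the Matryoshka dimensionality being evaluated
model = SentenceTransformer("adriansanz/ST-tramits-VL-001-5ep", truncate_dim=64)
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "Quin és el paper del certificat tècnic en la Declaració responsable d'obertura?"},
    corpus={"d1": "El certificat tècnic és un requisit per a l'exercici d'una activitat econòmica innòcua."},
    relevant_docs={"q1": {"d1"}},
    name="dim_64",
)
results = evaluator(model)
print(results["dim_64_cosine_map@100"])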

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 4,091 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min 6 tokens, mean 39.34 tokens, max 164 tokens
    • anchor: string, min 9 tokens, mean 20.77 tokens, max 49 tokens
  • Samples:
    • positive: Posteriorment a l’obtenció de l’informe favorable, caldrà realitzar l’acte de comprovació en matèria d’incendis i procedir a efectuar la comunicació prèvia corresponent.
      anchor: Quin és el resultat esperat després d'obtenir l'informe previ en matèria d'incendis?
    • positive: El certificat tècnic és un requisit per a l'exercici d'una activitat econòmica innòcua.
      anchor: Quin és el paper del certificat tècnic en la Declaració responsable d'obertura?
    • positive: El document necessari per realitzar l'autoliquidació de taxa per llicència de primera ocupació és la llicència de primera ocupació de l'immoble.
      anchor: Quin és el document necessari per realitzar l'autoliquidació de taxa per llicència de primera ocupació?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
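
A minimal sketch of this loss configuration in code, assuming a freshly loaded base model; MatryoshkaLoss wraps MultipleNegativesRankingLoss so that every prefix of the embedding (1024 down to 64 dimensions) is trained to rank well on its own:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
    # matryoshka_weights defaults to 1 per dimension, matching the config above
)

This nesting is also what makes the truncate_dim option shown under Evaluation meaningful at inference time.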
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
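
The same non-default settings expressed as SentenceTransformerTrainingArguments (sentence-transformers >= 3.0); output_dir is a placeholder, and save_strategy is an assumption needed because load_best_model_at_end requires the save and eval strategies to match:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="ST-tramits-VL-001-5ep",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)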

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_768_cosine_map@100 dim_512_cosine_map@100 dim_256_cosine_map@100 dim_128_cosine_map@100 dim_64_cosine_map@100
0.625 10 4.3533 - - - - - -
1.0 16 - 0.2076 0.2123 0.2055 0.1996 0.2188 0.1861
1.2461 20 2.4149 - - - - - -
1.8711 30 1.1968 - - - - - -
1.9961 32 - 0.2056 0.2318 0.2363 0.1932 0.2330 0.2255
2.4922 40 0.7983 - - - - - -
2.9922 48 - 0.2322 0.2512 0.2514 0.2385 0.2437 0.2489
3.1133 50 0.4869 - - - - - -
3.7383 60 0.3793 - - - - - -
3.9883 64 - 0.2414 0.2364 0.2365 0.2244 0.2167 0.2190
4.3594 70 0.3421 - - - - - -
4.9844 80 0.2925 0.2404 0.2568 0.2616 0.2440 0.2454 0.2299
  • The final row (epoch 4.9844, step 80) is the saved checkpoint; its per-dimension cosine_map@100 values are the ones reported in the Metrics tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.1
  • Transformers: 4.44.2
  • PyTorch: 2.5.0+cu121
  • Accelerate: 1.1.0.dev0
  • Datasets: 3.1.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}