
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
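
As a quick sanity check, the sequence length and embedding dimensionality listed above can be read back from the loaded model. This is a minimal sketch; the printed values assume the configuration described in this card.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/sqv-v3-10ep")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 1024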

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
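
For reference, the same three-module stack can be assembled by hand from sentence_transformers building blocks. The snippet below is only a sketch of an equivalent architecture around the base BAAI/bge-m3 weights; to use the fine-tuned weights, load adriansanz/sqv-v3-10ep directly as shown in the Usage section.

from sentence_transformers import SentenceTransformer, models

# Rebuild the Transformer -> CLS Pooling -> Normalize stack shown above
transformer = models.Transformer("BAAI/bge-m3", max_seq_length=8192)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 1024
    pooling_mode="cls",                          # CLS-token pooling, as configured above
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])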

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/sqv-v3-10ep")
# Run inference
sentences = [
    "Permet sol·licitar l’autorització per a l’ús comú especial de la via pública per a reserves temporals d’estacionament i espai públic per: càrrega/descàrrega de materials diversos davant d'una obra;",
    "Quins són els materials que es poden càrregar/descarregar en l'ocupació i reserves temporals amb càrrega/descàrrega de materials?",
    'Quin és el tràmit per canviar el domicili del permís de conducció i del permís de circulació?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
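
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to one of the smaller Matryoshka dimensions at load time. The snippet below is a sketch using the truncate_dim option available in recent sentence-transformers releases; it reuses the sentences list from the example above.

from sentence_transformers import SentenceTransformer

# Truncate embeddings to a smaller Matryoshka dimension, e.g. 256
model_256 = SentenceTransformer("adriansanz/sqv-v3-10ep", truncate_dim=256)
embeddings_256 = model_256.encode(sentences)
print(embeddings_256.shape)
# (3, 256)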

Evaluation

Metrics

The six Information Retrieval blocks below report the same metrics evaluated at the Matryoshka embedding dimensions used during training (matching the dim_* columns in the training logs): 1024, 768, 512, 256, 128, and 64.

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.1848
cosine_accuracy@3 0.5109
cosine_accuracy@5 0.6304
cosine_accuracy@10 0.7065
cosine_precision@1 0.1848
cosine_precision@3 0.1703
cosine_precision@5 0.1261
cosine_precision@10 0.0707
cosine_recall@1 0.1848
cosine_recall@3 0.5109
cosine_recall@5 0.6304
cosine_recall@10 0.7065
cosine_ndcg@10 0.4495
cosine_mrr@10 0.366
cosine_map@100 0.3751

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.2065
cosine_accuracy@3 0.5217
cosine_accuracy@5 0.6196
cosine_accuracy@10 0.7065
cosine_precision@1 0.2065
cosine_precision@3 0.1739
cosine_precision@5 0.1239
cosine_precision@10 0.0707
cosine_recall@1 0.2065
cosine_recall@3 0.5217
cosine_recall@5 0.6196
cosine_recall@10 0.7065
cosine_ndcg@10 0.4552
cosine_mrr@10 0.3741
cosine_map@100 0.3836

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.1957
cosine_accuracy@3 0.5
cosine_accuracy@5 0.587
cosine_accuracy@10 0.663
cosine_precision@1 0.1957
cosine_precision@3 0.1667
cosine_precision@5 0.1174
cosine_precision@10 0.0663
cosine_recall@1 0.1957
cosine_recall@3 0.5
cosine_recall@5 0.587
cosine_recall@10 0.663
cosine_ndcg@10 0.4325
cosine_mrr@10 0.3577
cosine_map@100 0.3691

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.1848
cosine_accuracy@3 0.5109
cosine_accuracy@5 0.5978
cosine_accuracy@10 0.6848
cosine_precision@1 0.1848
cosine_precision@3 0.1703
cosine_precision@5 0.1196
cosine_precision@10 0.0685
cosine_recall@1 0.1848
cosine_recall@3 0.5109
cosine_recall@5 0.5978
cosine_recall@10 0.6848
cosine_ndcg@10 0.4326
cosine_mrr@10 0.3513
cosine_map@100 0.3601

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.1413
cosine_accuracy@3 0.3913
cosine_accuracy@5 0.5435
cosine_accuracy@10 0.6522
cosine_precision@1 0.1413
cosine_precision@3 0.1304
cosine_precision@5 0.1087
cosine_precision@10 0.0652
cosine_recall@1 0.1413
cosine_recall@3 0.3913
cosine_recall@5 0.5435
cosine_recall@10 0.6522
cosine_ndcg@10 0.3875
cosine_mrr@10 0.3033
cosine_map@100 0.3131

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.1304
cosine_accuracy@3 0.3261
cosine_accuracy@5 0.4239
cosine_accuracy@10 0.5761
cosine_precision@1 0.1304
cosine_precision@3 0.1087
cosine_precision@5 0.0848
cosine_precision@10 0.0576
cosine_recall@1 0.1304
cosine_recall@3 0.3261
cosine_recall@5 0.4239
cosine_recall@10 0.5761
cosine_ndcg@10 0.3304
cosine_mrr@10 0.2548
cosine_map@100 0.266
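
Metrics such as these are typically produced with the InformationRetrievalEvaluator from sentence_transformers. The snippet below is a minimal sketch of how comparable numbers could be computed; the queries, corpus, and relevant_docs shown are hypothetical placeholders, not the actual evaluation split.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("adriansanz/sqv-v3-10ep")

# Hypothetical evaluation data: query id -> query text, doc id -> document text,
# and query id -> set of relevant doc ids
queries = {"q1": "Com puc consultar l'estat tributari d'un contribuent?"}
corpus = {"d1": "Consultar l'estat tributari d'un contribuent. ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_1024",
)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
print(results)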

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 828 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 828 samples:
    • positive: string; min: 5 tokens, mean: 41.95 tokens, max: 117 tokens
    • anchor: string; min: 9 tokens, mean: 20.81 tokens, max: 50 tokens
  • Samples:
    • positive: Consultar l'estat tributari d'un contribuent. Us permet consultar l'estat dels rebuts i liquidacions que estan a nom del contribuent titular d'un certificat electrònic, així com els elements que configuren el càlcul per determinar el deute tributari de cadascun d'ells.
      anchor: Com puc consultar l'estat tributari d'un contribuent?
    • positive: L'informe facultatiu servirà per tramitar una autorització de residència temporal per arrelament social.
      anchor: Quin és el tràmit relacionat amb la residència a l'Ajuntament?
    • positive: Aquesta targeta, és el document que dona dret a persones físiques o jurídiques titulars de vehicles adaptats destinats al transport col·lectiu de persones amb discapacitat...
      anchor: Quin és el benefici de tenir la targeta d'aparcament de transport col·lectiu per a les persones amb discapacitat?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
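
The parameters above correspond to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, so the same ranking objective is applied at every listed embedding dimension with equal weight. A minimal construction sketch:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
)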
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
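
Below is a sketch of how these non-default hyperparameters could be passed to the Sentence Transformers v3 trainer, continuing from the loss sketch above. The output_dir and train.json path are placeholders, and save_strategy="epoch" is an assumption added so that load_best_model_at_end works with epoch-level evaluation.

from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Placeholder: 828 training pairs with 'positive' and 'anchor' columns in a local JSON file
train_dataset = load_dataset("json", data_files="train.json", split="train")

args = SentenceTransformerTrainingArguments(
    output_dir="output/sqv-v3-10ep",       # placeholder
    num_train_epochs=10,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",                 # assumed, to pair with eval_strategy
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,           # the BAAI/bge-m3 model from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    loss=loss,             # the MatryoshkaLoss from the sketch above
)
trainer.train()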

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.9231 3 - 0.3751 0.3131 0.3601 0.3691 0.266 0.3836
1.8462 6 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
2.7692 9 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
3.0769 10 0.6783 - - - - - -
4.0 13 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
4.9231 16 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
5.8462 19 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
6.1538 20 0.2906 - - - - - -
6.7692 22 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
8.0 26 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
8.9231 29 - 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
9.2308 30 0.1565 0.3751 0.3131 0.3601 0.3691 0.2660 0.3836
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.35.0.dev0
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}