test qwen2 Matryoshka

This is a sentence-transformers model finetuned from actualdata/bilingual-embedding-large. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: actualdata/bilingual-embedding-large
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BilingualModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
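
Since the Pooling module mean-pools token embeddings and the Normalize() module L2-normalizes the result, cosine similarity between embeddings reduces to a plain dot product. A minimal sketch to verify this (assumes the model loads from the Hub as in the usage example below):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sylvain471/bl_ademe_large")
emb = model.encode(["Première phrase.", "Second sentence."])

# Normalize() makes every embedding unit-length ...
print(np.linalg.norm(emb, axis=1))  # approximately [1. 1.]

# ... so the dot product equals the cosine similarity.
print(emb @ emb.T)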

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sylvain471/bl_ademe_large")
# Run inference
sentences = [
    " Le PCS intègre l'énergie libérée par la condensation de l'eau après la combustion, tandis que le PCI ne l'intègre pas.",
    " Qu'est-ce qui distingue le Pouvoir Calorifique Supérieur (PCS) du Pouvoir Calorifique Inférieur (PCI) ?",
    " La proportion d'énergie utilisée dans l'eau chaude sanitaire pour les résidences principales (métropole uniquement) est-elle supérieure à 1 % ?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
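
Because the model was trained with MatryoshkaLoss over the dimensions [1024, 896, 512, 256, 128] (see Training Details below), embeddings can be truncated to a smaller dimension at a modest cost in retrieval quality. A sketch using the truncate_dim argument of SentenceTransformer (available since sentence-transformers 2.7):

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer("sylvain471/bl_ademe_large", truncate_dim=256)
embeddings = model.encode(sentences)  # `sentences` as in the example above
print(embeddings.shape)
# (3, 256)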

Evaluation

Metrics

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.3168
cosine_accuracy@3 0.4254
cosine_accuracy@5 0.477
cosine_accuracy@10 0.5562
cosine_precision@1 0.3168
cosine_precision@3 0.1418
cosine_precision@5 0.0954
cosine_precision@10 0.0556
cosine_recall@1 0.3168
cosine_recall@3 0.4254
cosine_recall@5 0.477
cosine_recall@10 0.5562
cosine_ndcg@10 0.4276
cosine_mrr@10 0.3876
cosine_map@100 0.3994

Information Retrieval (dim_896)

Metric Value
cosine_accuracy@1 0.3223
cosine_accuracy@3 0.4236
cosine_accuracy@5 0.4733
cosine_accuracy@10 0.5488
cosine_precision@1 0.3223
cosine_precision@3 0.1412
cosine_precision@5 0.0947
cosine_precision@10 0.0549
cosine_recall@1 0.3223
cosine_recall@3 0.4236
cosine_recall@5 0.4733
cosine_recall@10 0.5488
cosine_ndcg@10 0.4272
cosine_mrr@10 0.3894
cosine_map@100 0.4018

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.3315
cosine_accuracy@3 0.4236
cosine_accuracy@5 0.4751
cosine_accuracy@10 0.5488
cosine_precision@1 0.3315
cosine_precision@3 0.1412
cosine_precision@5 0.095
cosine_precision@10 0.0549
cosine_recall@1 0.3315
cosine_recall@3 0.4236
cosine_recall@5 0.4751
cosine_recall@10 0.5488
cosine_ndcg@10 0.4309
cosine_mrr@10 0.3943
cosine_map@100 0.4065

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.3076
cosine_accuracy@3 0.4125
cosine_accuracy@5 0.4678
cosine_accuracy@10 0.5396
cosine_precision@1 0.3076
cosine_precision@3 0.1375
cosine_precision@5 0.0936
cosine_precision@10 0.054
cosine_recall@1 0.3076
cosine_recall@3 0.4125
cosine_recall@5 0.4678
cosine_recall@10 0.5396
cosine_ndcg@10 0.4156
cosine_mrr@10 0.3769
cosine_map@100 0.3896

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.2965
cosine_accuracy@3 0.4052
cosine_accuracy@5 0.4475
cosine_accuracy@10 0.5396
cosine_precision@1 0.2965
cosine_precision@3 0.1351
cosine_precision@5 0.0895
cosine_precision@10 0.054
cosine_recall@1 0.2965
cosine_recall@3 0.4052
cosine_recall@5 0.4475
cosine_recall@10 0.5396
cosine_ndcg@10 0.4079
cosine_mrr@10 0.3672
cosine_map@100 0.3789
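
The five tables above report the same retrieval benchmark at the Matryoshka dimensions 1024, 896, 512, 256, and 128; their cosine_map@100 values match the dim_* columns of the training logs below. Metrics of this kind can be recomputed with InformationRetrievalEvaluator. A sketch follows, where the queries, corpus, and relevance judgments are placeholders rather than the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sylvain471/bl_ademe_large")

# Placeholder data: query ids and corpus ids mapped to texts,
# plus the set of relevant corpus ids for each query.
queries = {"q1": "Qu'est-ce qu'une évaluation de cycle de vie (ACV) ?"}
corpus = {"d1": "Une analyse de cycle de vie fournit un moyen efficace et systémique pour évaluer les impacts environnementaux."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    truncate_dim=256,  # evaluate a single Matryoshka dimension
    name="dim_256",
)
print(evaluator(model))  # accuracy@k, precision@k, recall@k, NDCG, MRR, MAP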

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,885 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 3 tokens, mean: 32.82 tokens, max: 185 tokens
    • anchor: string; min: 2 tokens, mean: 26.77 tokens, max: 71 tokens
  • Samples:
    • positive: Lorsque le traitement spécifique par catégorie de déchets produits par la Personne Morale est inconnu, le taux moyen local ou sectoriel de traitement en fin de vie (incinération, mise en décharge, recyclage, compostage, etc.) est utilisé. Le transport est également un paramètre à intégrer au calcul.
      anchor: Quels sont les paramètres clés par type de traitement à prendre en compte pour réaliser un bilan d'émissions de gaz à effet de serre ?
    • positive: Une analyse de cycle de vie fournit un moyen efficace et systémique pour évaluer les impacts environnementaux d'un produit, d'un service, d'une entreprise ou d'un procédé.
      anchor: Qu'est-ce qu'une évaluation de cycle de vie (ACV) ?
    • positive: 1 469,2 t CO2e.
      anchor: Quel est le total des émissions annuelles de l'entreprise GAMMA ?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            896,
            512,
            256,
            128
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    
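In other words, MultipleNegativesRankingLoss (an in-batch negatives contrastive loss) is applied at every truncation dimension with equal weight. A minimal sketch of constructing this loss, where the base-model loading details are an assumption (the custom BilingualModel architecture may require trust_remote_code=True):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("actualdata/bilingual-embedding-large", trust_remote_code=True)
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[1024, 896, 512, 256, 128],  # all dims weighted equally by default
)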

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • gradient_accumulation_steps: 8
  • learning_rate: 2e-05
  • num_train_epochs: 20
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
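
Assuming training used the sentence-transformers v3 trainer API (consistent with the framework versions listed below), the non-default values above map onto SentenceTransformerTrainingArguments roughly as follows; output_dir and save_strategy are assumptions:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bl_ademe_large",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy when load_best_model_at_end=True
    per_device_train_batch_size=16,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)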

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_896_cosine_map@100
0.2614 10 5.4141 - - - - -
0.5229 20 4.2823 - - - - -
0.7843 30 3.0162 - - - - -
0.9935 38 - 0.3636 0.3170 0.3407 0.3566 0.3668
1.0458 40 2.5846 - - - - -
1.3072 50 2.2069 - - - - -
1.5686 60 1.7585 - - - - -
1.8301 70 1.3099 - - - - -
1.9869 76 - 0.3979 0.3353 0.3726 0.3895 0.3983
2.0915 80 1.1449 - - - - -
2.3529 90 1.0137 - - - - -
2.6144 100 0.6402 - - - - -
2.8758 110 0.4931 - - - - -
2.9804 114 - 0.4026 0.3568 0.3808 0.3882 0.3992
3.1373 120 0.4662 - - - - -
3.3987 130 0.3782 - - - - -
3.6601 140 0.2696 - - - - -
3.9216 150 0.2478 - - - - -
4.0 153 - 0.3805 0.3460 0.3613 0.3680 0.3850
4.1830 160 0.2655 - - - - -
4.4444 170 0.1952 - - - - -
4.7059 180 0.1494 - - - - -
4.9673 190 0.1482 - - - - -
4.9935 191 - 0.3806 0.3619 0.3702 0.3799 0.3814
5.2288 200 0.161 - - - - -
5.4902 210 0.1282 - - - - -
5.7516 220 0.0888 - - - - -
5.9869 229 - 0.3936 0.3685 0.3758 0.3870 0.3916
6.0131 230 0.1042 - - - - -
6.2745 240 0.126 - - - - -
6.5359 250 0.103 - - - - -
6.7974 260 0.0467 - - - - -
6.9804 267 - 0.4022 0.3689 0.3897 0.3950 0.4022
7.0588 270 0.0581 - - - - -
7.3203 280 0.0728 - - - - -
7.5817 290 0.064 - - - - -
7.8431 300 0.0271 - - - - -
8.0 306 - 0.4010 0.3756 0.3872 0.3988 0.4021
8.1046 310 0.0452 - - - - -
8.3660 320 0.0613 - - - - -
8.6275 330 0.0294 - - - - -
8.8889 340 0.0396 - - - - -
8.9935 344 - 0.3914 0.3722 0.3801 0.3916 0.3939
9.1503 350 0.024 - - - - -
9.4118 360 0.0253 - - - - -
9.6732 370 0.017 - - - - -
9.9346 380 0.0163 - - - - -
9.9869 382 - 0.3901 0.3660 0.3796 0.3892 0.3904
10.1961 390 0.0191 - - - - -
10.4575 400 0.017 - - - - -
10.7190 410 0.0108 - - - - -
10.9804 420 0.0118 0.3994 0.3789 0.3896 0.4065 0.4018
11.2418 430 0.0111 - - - - -
11.5033 440 0.011 - - - - -
11.7647 450 0.0052 - - - - -
12.0 459 - 0.4030 0.3772 0.3986 0.4034 0.3999
12.0261 460 0.0144 - - - - -
12.2876 470 0.0068 - - - - -
12.5490 480 0.0061 - - - - -
12.8105 490 0.0039 - - - - -
12.9935 497 - 0.4022 0.3733 0.3869 0.3995 0.3983
13.0719 500 0.0074 - - - - -
13.3333 510 0.005 - - - - -
13.5948 520 0.0045 - - - - -
13.8562 530 0.0035 - - - - -
13.9869 535 - 0.4027 0.3779 0.3891 0.4015 0.3999
14.1176 540 0.0047 - - - - -
14.3791 550 0.0043 - - - - -
14.6405 560 0.0038 - - - - -
14.9020 570 0.0034 - - - - -
14.9804 573 - 0.3954 0.3734 0.3875 0.3982 0.3962
15.1634 580 0.0037 - - - - -
15.4248 590 0.0039 - - - - -
15.6863 600 0.0034 - - - - -
15.9477 610 0.0033 - - - - -
16.0 612 - 0.3966 0.3720 0.3852 0.3948 0.3936
16.2092 620 0.0038 - - - - -
16.4706 630 0.0034 - - - - -
16.7320 640 0.0029 - - - - -
16.9935 650 0.0033 0.3968 0.3723 0.3844 0.3977 0.3966
17.2549 660 0.0034 - - - - -
17.5163 670 0.0033 - - - - -
17.7778 680 0.0028 - - - - -
17.9869 688 - 0.3965 0.3695 0.3861 0.3960 0.3969
18.0392 690 0.0033 - - - - -
18.3007 700 0.0033 - - - - -
18.5621 710 0.0036 - - - - -
18.8235 720 0.0026 - - - - -
18.9804 726 - 0.3962 0.3701 0.3819 0.3951 0.3964
19.0850 730 0.003 - - - - -
19.3464 740 0.0036 - - - - -
19.6078 750 0.0033 - - - - -
19.8693 760 0.0031 0.3994 0.3789 0.3896 0.4065 0.4018
  • The saved checkpoint corresponds to epoch 10.9804 (step 420), whose metrics match the final evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
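
To approximate this environment, the listed versions can be pinned (the PyTorch CUDA build, cu121 here, depends on your platform):

pip install sentence-transformers==3.0.1 transformers==4.44.2 torch==2.4.1 accelerate==0.34.2 datasets==2.21.0 tokenizers==0.19.1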

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}