SentenceTransformer based on ai-forever/ru-en-RoSBERTa

This is a sentence-transformers model finetuned from ai-forever/ru-en-RoSBERTa on the match-pairs and clusters datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: ai-forever/ru-en-RoSBERTa
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Datasets:
    • match-pairs
    • clusters

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
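
The stack reads: transformer encoder → CLS-token pooling → L2 normalization. The two post-encoder steps can be sketched on dummy hidden states standing in for real RobertaModel output (the helper name is ours, not part of the library):

```python
import torch

def cls_pool_and_normalize(token_embeddings: torch.Tensor) -> torch.Tensor:
    """token_embeddings: (batch, seq_len, 1024) hidden states from the encoder.
    Returns (batch, 1024) unit-length sentence embeddings."""
    cls = token_embeddings[:, 0]  # pooling_mode_cls_token=True: take the first token
    return torch.nn.functional.normalize(cls, p=2, dim=1)  # the Normalize() module

# Dummy hidden states in place of real RobertaModel output
hidden = torch.randn(3, 128, 1024)
emb = cls_pool_and_normalize(hidden)
print(emb.shape)        # torch.Size([3, 1024])
print(emb.norm(dim=1))  # each row has norm ≈ 1.0
```

Because of the final normalization, dot products between outputs equal cosine similarities.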

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("poc-embeddings/ru-en-RoSBERTa-trade-magnet")
# Run inference
sentences = [
    'Продаю попугайчика, очень веселый, 5 лет, цена 3000 рублей.',
    'Нужен попугай, люблю этих птичек, вдруг кто-то продает.',
    'Ищу недорогой автомобиль для поездок по деревне, бюджет до 200 тысяч.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
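
Since the embeddings are unit-normalized, the similarity matrix is just the matrix of pairwise dot products, and the closest match for each sentence is the argmax of its row once the trivial self-match is masked. A sketch with made-up unit vectors standing in for real embeddings:

```python
import numpy as np

# Hypothetical unit-length embeddings for the three sentences above
emb = np.array([[0.6, 0.8, 0.0],
                [0.8, 0.6, 0.0],
                [0.0, 0.0, 1.0]])

sims = emb @ emb.T               # cosine similarity = dot product of unit vectors
np.fill_diagonal(sims, -np.inf)  # mask self-similarity
best = sims.argmax(axis=1)       # index of the closest other sentence
print(best)  # [1 0 0]: the two parrot ads pair with each other
```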

Training Details

Training Datasets

match-pairs

  • Dataset: match-pairs
  • Size: 536 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 536 samples:
    • anchor (string): min 9 / mean 22.15 / max 61 tokens
    • positive (string): min 5 / mean 18.15 / max 40 tokens
  • Samples:
    • anchor: Ищу работу HR-менеджера, опыт 4 года, знание трудового законодательства.
      positive: Требуется HR-менеджер с опытом работы и знанием трудового законодательства.
    • anchor: Акция на косметику, 3 по цене 2, только до конца недели!
      positive: Кто видел скидки на косметику в последних рекламках?
    • anchor: Продам ковер ручной работы из шерсти, из Ирана, размер 2х3 метра, состояние отличное, покупался за 150 тысяч, отдам за 100 тысяч.
      positive: Ищу ковер из натуральных материалов, размер 2х3 метра, в хорошем состоянии, по адекватной цене.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
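
With these settings, each in-batch pair (anchor_i, positive_i) is scored against every other positive: cosine similarities are multiplied by scale=20.0 and used as logits in a cross-entropy whose correct class for row i is column i. A minimal sketch of the idea (not the library's implementation):

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0):
    """anchors, positives: (batch, dim); row i of each forms a matched pair."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = scale * a @ p.T       # scaled cos_sim; off-diagonal = in-batch negatives
    labels = torch.arange(len(a))  # pair i should score highest in row i
    return F.cross_entropy(logits, labels)

loss = mnr_loss(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss.item())
```

Larger batches supply more in-batch negatives, which is one reason the per-device batch size of 128 matters for this loss.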
    

clusters

  • Dataset: clusters
  • Size: 19,452 training samples
  • Columns: sentence and label
  • Approximate statistics based on the first 1000 samples:
    • sentence (string): min 5 / mean 49.97 / max 128 tokens
    • label (int), distributed as follows:
    • 0: ~2.30%
    • 1: ~0.70%
    • 2: ~0.60%
    • 3: ~1.40%
    • 4: ~0.20%
    • 5: ~0.50%
    • 6: ~1.60%
    • 7: ~7.30%
    • 8: ~0.50%
    • 9: ~0.90%
    • 10: ~0.40%
    • 11: ~13.40%
    • 12: ~0.70%
    • 13: ~1.10%
    • 14: ~1.60%
    • 15: ~3.80%
    • 16: ~2.70%
    • 17: ~1.70%
    • 18: ~3.40%
    • 19: ~0.70%
    • 20: ~1.20%
    • 21: ~1.00%
    • 22: ~2.70%
    • 23: ~3.80%
    • 24: ~4.20%
    • 25: ~1.10%
    • 26: ~4.00%
    • 27: ~0.70%
    • 28: ~1.90%
    • 29: ~0.60%
    • 30: ~0.90%
    • 31: ~5.70%
    • 32: ~1.40%
    • 33: ~1.60%
    • 34: ~0.80%
    • 35: ~3.50%
    • 36: ~0.50%
    • 37: ~0.10%
    • 38: ~0.70%
    • 39: ~0.40%
    • 40: ~0.40%
    • 41: ~0.50%
    • 42: ~0.10%
    • 43: ~1.00%
    • 44: ~1.70%
    • 45: ~0.40%
    • 46: ~1.10%
    • 47: ~0.70%
    • 48: ~0.70%
    • 49: ~1.10%
    • 50: ~0.50%
    • 51: ~0.20%
    • 52: ~0.50%
    • 53: ~0.80%
    • 54: ~0.70%
    • 55: ~0.80%
    • 56: ~0.20%
    • 57: ~0.70%
    • 58: ~0.20%
    • 59: ~0.40%
    • 60: ~0.30%
    • 61: ~0.40%
    • 63: ~0.80%
    • 64: ~0.20%
    • 65: ~0.90%
    • 66: ~0.20%
    • 67: ~0.20%
    • 68: ~0.20%
    • 69: ~0.10%
    • 70: ~0.20%
    • 71: ~0.20%
    • 73: ~0.50%
    • 74: ~0.10%
    • 75: ~0.20%
    • 76: ~0.20%
    • 78: ~0.30%
  • Samples:
    • sentence: Продам кроссовки New Balance 574 Новые. Размер: 9 US, 42.5 EU Цена: 250 лари Больше моделей в шапке профиля. (label: 31)
    • sentence: Куплю Новый MagicQ MQ250M (label: 27)
    • sentence: КУПЛЮ iPhone 6s, 7, 8 возможно с дефектом‼️ (label: 15)
  • Loss: BatchHardSoftMarginTripletLoss
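
BatchHardSoftMarginTripletLoss mines, for every anchor in a labeled batch, the farthest same-label sample (hardest positive) and the nearest other-label sample (hardest negative), then applies the soft-margin loss log(1 + exp(d_pos - d_neg)). A sketch assuming Euclidean distance, the sentence-transformers default for this loss:

```python
import torch

def batch_hard_soft_margin_triplet(embeddings: torch.Tensor, labels: torch.Tensor):
    """embeddings: (batch, dim); labels: (batch,) integer cluster ids."""
    dist = torch.cdist(embeddings, embeddings)  # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(len(labels), dtype=torch.bool)
    hardest_pos = dist.masked_fill(~same | self_mask, float('-inf')).amax(dim=1)
    hardest_neg = dist.masked_fill(same, float('inf')).amin(dim=1)
    # softplus(x) = log(1 + exp(x)): a smooth, margin-free hinge
    return torch.nn.functional.softplus(hardest_pos - hardest_neg).mean()

emb = torch.randn(6, 1024)
labels = torch.tensor([0, 0, 1, 1, 2, 2])
print(batch_hard_soft_margin_triplet(emb, labels).item())
```

The loss falls toward zero as same-cluster listings end up closer together than any cross-cluster pair.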

Evaluation Datasets

match-pairs

  • Dataset: match-pairs
  • Size: 536 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 536 samples:
    • anchor (string): min 11 / mean 21.78 / max 45 tokens
    • positive (string): min 6 / mean 16.61 / max 39 tokens
  • Samples:
    • anchor: Отдам бульдозер Komatsu, почти новый, Ростов-на-Дону, 4 млн рублей.
      positive: Кто продает бульдозер Komatsu в Ростове-на-Дону?
    • anchor: Нужен PHP-разработчик, удаленка, ЗП до 150к.
      positive: Ищу работу как разработчик на PHP, можно удаленно, зарплата от 100к.
    • anchor: Ищу программиста Python, нужен опытный человек, чтобы сделать сайт для компании, пишите в личку, обсудим детали.
      positive: Программист python, опыт работы 2 года.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

clusters

  • Dataset: clusters
  • Size: 19,452 evaluation samples
  • Columns: sentence and label
  • Approximate statistics based on the first 1000 samples:
    • sentence (string): min 5 / mean 48.11 / max 128 tokens
    • label (int), distributed as follows:
    • 0: ~1.80%
    • 1: ~0.60%
    • 2: ~0.50%
    • 3: ~0.40%
    • 4: ~0.60%
    • 5: ~0.60%
    • 6: ~1.80%
    • 7: ~5.40%
    • 8: ~0.30%
    • 9: ~1.00%
    • 10: ~0.40%
    • 11: ~13.70%
    • 12: ~0.70%
    • 13: ~1.30%
    • 14: ~1.60%
    • 15: ~3.70%
    • 16: ~2.60%
    • 17: ~1.90%
    • 18: ~3.60%
    • 19: ~0.30%
    • 20: ~0.80%
    • 21: ~1.20%
    • 22: ~2.70%
    • 23: ~3.20%
    • 24: ~5.30%
    • 25: ~0.40%
    • 26: ~4.10%
    • 27: ~0.80%
    • 28: ~2.00%
    • 29: ~0.80%
    • 30: ~0.70%
    • 31: ~7.40%
    • 32: ~1.20%
    • 33: ~1.30%
    • 34: ~0.80%
    • 35: ~2.80%
    • 36: ~0.50%
    • 37: ~0.60%
    • 38: ~0.30%
    • 39: ~0.10%
    • 40: ~0.80%
    • 41: ~1.20%
    • 42: ~0.40%
    • 43: ~0.80%
    • 44: ~2.10%
    • 45: ~0.60%
    • 46: ~0.50%
    • 47: ~0.70%
    • 48: ~0.60%
    • 49: ~0.40%
    • 50: ~0.90%
    • 51: ~0.20%
    • 52: ~0.60%
    • 53: ~1.00%
    • 54: ~1.10%
    • 55: ~0.80%
    • 56: ~0.30%
    • 57: ~0.80%
    • 58: ~0.30%
    • 59: ~0.50%
    • 60: ~0.30%
    • 61: ~0.10%
    • 62: ~0.30%
    • 63: ~0.70%
    • 64: ~0.50%
    • 65: ~0.30%
    • 66: ~0.60%
    • 67: ~0.50%
    • 68: ~0.10%
    • 69: ~0.30%
    • 70: ~0.20%
    • 71: ~0.40%
    • 72: ~0.10%
    • 73: ~0.20%
    • 74: ~0.10%
    • 75: ~0.10%
    • 76: ~0.40%
    • 77: ~0.10%
    • 78: ~0.30%
  • Samples:
    • sentence: Куплю клетчатую сумку с замком, либо подобную, пишите в лс (label: 1)
    • sentence: asus r 752 l - 1tb HDD, 12gb ddr3, nvidia GeForce 940, intel core i7 5500u - 550 лари. (label: 14)
    • sentence: срочно Продам геймпад Defender X7 с держателем для телефона Состояние - новый 1300р. (label: 15)
  • Loss: BatchHardSoftMarginTripletLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • learning_rate: 2e-05
  • weight_decay: 0.022
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.17
  • fp16: True
  • dataloader_num_workers: 8

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.022
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.17
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 8
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
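
With multi_dataset_batch_sampler: proportional, each batch comes from a single dataset, and datasets are picked in proportion to their size, so clusters (19,452 samples) supplies roughly 97% of batches versus match-pairs (536). A hypothetical size-weighted picker illustrating the ratio (not the library's sampler):

```python
import random

def choose_dataset(sizes: list[int], rng=random) -> int:
    """Pick a dataset index with probability proportional to its size."""
    r = rng.uniform(0, sum(sizes))
    for i, size in enumerate(sizes):
        r -= size
        if r <= 0:
            return i
    return len(sizes) - 1

sizes = [536, 19_452]  # match-pairs, clusters
counts = [0, 0]
random.seed(0)
for _ in range(10_000):
    counts[choose_dataset(sizes)] += 1
print(counts)  # the clusters dataset wins ~97% of draws
```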

Training Logs

Epoch    Step   Training Loss   match-pairs loss   clusters loss
0.3546     50          0.6678             1.0062          0.6543
0.7092    100          0.7114             0.7569          0.6323
1.0638    150          0.6571             0.7267          0.6181
1.4184    200          0.6263             0.9529          0.6057
1.7730    250          0.6396             0.9458          0.5934

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

BatchHardSoftMarginTripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Model size: 405M parameters (F32, stored as Safetensors)