SentenceTransformer based on ai-forever/ru-en-RoSBERTa
This is a sentence-transformers model finetuned from ai-forever/ru-en-RoSBERTa on the match-pairs and clusters datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: ai-forever/ru-en-RoSBERTa
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Datasets:
- match-pairs
- clusters
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
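The pooling configuration above means a sentence embedding is simply the L2-normalized `[CLS]` token vector (mean, max, and last-token pooling are all disabled). A minimal numpy sketch of what modules (1) and (2) do, using a random toy tensor in place of the real RobertaModel output:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Apply CLS pooling then L2 normalization, mirroring modules (1) and (2).

    token_embeddings: (batch, seq_len, hidden) Transformer output.
    Returns: (batch, hidden) unit-length sentence embeddings.
    """
    cls = token_embeddings[:, 0, :]  # pooling_mode_cls_token=True -> first token
    norms = np.linalg.norm(cls, axis=1, keepdims=True)
    return cls / norms               # Normalize() -> unit vectors

# Toy stand-in for the Transformer output: batch of 2, seq_len 4, hidden 1024.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2, 4, 1024))
emb = cls_pool_and_normalize(hidden_states)
print(emb.shape)                                      # (2, 1024)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))  # True
```

Because every embedding comes out unit-length, the cosine similarity used downstream reduces to a plain dot product.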
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("poc-embeddings/ru-en-RoSBERTa-trade-magnet")
# Run inference
sentences = [
    # "Selling a parakeet, very cheerful, 5 years old, price 3000 rubles."
    'Продаю попугайчика, очень веселый, 5 лет, цена 3000 рублей.',
    # "Looking for a parrot, I love these birds, maybe someone is selling one."
    'Нужен попугай, люблю этих птичек, вдруг кто-то продает.',
    # "Looking for an inexpensive car for village trips, budget up to 200 thousand."
    'Ищу недорогой автомобиль для поездок по деревне, бюджет до 200 тысяч.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
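Since the embeddings are unit-normalized, semantic search over a corpus is just a matrix multiply followed by an argsort. A hedged sketch of that ranking step with small made-up vectors standing in for real `model.encode` output (the `rank_corpus` helper is illustrative, not part of the library):

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize along the last axis, as the model's Normalize module does."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def rank_corpus(query_emb: np.ndarray, corpus_embs: np.ndarray):
    """Rank corpus rows by cosine similarity to the query.

    Both inputs are assumed unit-length, so dot product == cosine similarity.
    """
    scores = corpus_embs @ query_emb
    order = np.argsort(-scores)        # highest similarity first
    return order, scores[order]

# Toy 4-dim stand-ins for the model's 1024-dim embeddings.
query = normalize(np.array([1.0, 0.2, 0.0, 0.0]))
corpus = normalize(np.array([
    [1.0, 0.1, 0.0, 0.0],   # near-duplicate of the query
    [0.0, 1.0, 0.5, 0.0],   # loosely related
    [0.0, 0.0, 0.0, 1.0],   # unrelated
]))
order, scores = rank_corpus(query, corpus)
print(order)  # [0 1 2]
```

With the real model, `corpus` would be `model.encode(documents)` and `query` would be `model.encode(query_text)`; for large corpora the same dot-product scoring is what approximate-nearest-neighbor indexes accelerate.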
Training Details
Training Datasets
match-pairs
- Dataset: match-pairs
- Size: 536 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 536 samples:

| | anchor | positive |
|:--|:--|:--|
| type | string | string |
| details | min: 9 tokens, mean: 22.15 tokens, max: 61 tokens | min: 5 tokens, mean: 18.15 tokens, max: 40 tokens |
- Samples:

| anchor | positive |
|:--|:--|
| Ищу работу HR-менеджера, опыт 4 года, знание трудового законодательства. | Требуется HR-менеджер с опытом работы и знанием трудового законодательства. |
| Акция на косметику, 3 по цене 2, только до конца недели! | Кто видел скидки на косметику в последних рекламках? |
| Продам ковер ручной работы из шерсти, из Ирана, размер 2х3 метра, состояние отличное, покупался за 150 тысяч, отдам за 100 тысяч. | Ищу ковер из натуральных материалов, размер 2х3 метра, в хорошем состоянии, по адекватной цене. |
- Loss: MultipleNegativesRankingLoss with these parameters:

```json
{ "scale": 20.0, "similarity_fct": "cos_sim" }
```
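MultipleNegativesRankingLoss treats each (anchor, positive) pair as the one correct match in the batch and every other in-batch positive as a negative: scaled cosine similarities go through a softmax cross-entropy whose target is the diagonal. A minimal numpy sketch under those assumptions, with toy embeddings and the configured `scale=20`:

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """In-batch-negatives ranking loss: cross-entropy over scaled cos-sim rows."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                      # (batch, batch) cos_sim * scale
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # target for row i is column i

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
positives = anchors + 0.1 * rng.normal(size=(4, 8))       # near-identical pairs
loss_good = mnr_loss(anchors, positives)
loss_bad = mnr_loss(anchors, rng.normal(size=(4, 8)))     # random "positives"
print(loss_good < loss_bad)  # True: aligned pairs give lower loss
```

This loss explains the dataset shape: it needs only (anchor, positive) pairs, with larger batches (here 128) automatically supplying more negatives.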
clusters
- Dataset: clusters
- Size: 19,452 training samples
- Columns: sentence and label
- Approximate statistics based on the first 1000 samples:
  - sentence: string; min: 5 tokens, mean: 49.97 tokens, max: 128 tokens
  - label: int, with approximate class distribution:
- 0: ~2.30%
- 1: ~0.70%
- 2: ~0.60%
- 3: ~1.40%
- 4: ~0.20%
- 5: ~0.50%
- 6: ~1.60%
- 7: ~7.30%
- 8: ~0.50%
- 9: ~0.90%
- 10: ~0.40%
- 11: ~13.40%
- 12: ~0.70%
- 13: ~1.10%
- 14: ~1.60%
- 15: ~3.80%
- 16: ~2.70%
- 17: ~1.70%
- 18: ~3.40%
- 19: ~0.70%
- 20: ~1.20%
- 21: ~1.00%
- 22: ~2.70%
- 23: ~3.80%
- 24: ~4.20%
- 25: ~1.10%
- 26: ~4.00%
- 27: ~0.70%
- 28: ~1.90%
- 29: ~0.60%
- 30: ~0.90%
- 31: ~5.70%
- 32: ~1.40%
- 33: ~1.60%
- 34: ~0.80%
- 35: ~3.50%
- 36: ~0.50%
- 37: ~0.10%
- 38: ~0.70%
- 39: ~0.40%
- 40: ~0.40%
- 41: ~0.50%
- 42: ~0.10%
- 43: ~1.00%
- 44: ~1.70%
- 45: ~0.40%
- 46: ~1.10%
- 47: ~0.70%
- 48: ~0.70%
- 49: ~1.10%
- 50: ~0.50%
- 51: ~0.20%
- 52: ~0.50%
- 53: ~0.80%
- 54: ~0.70%
- 55: ~0.80%
- 56: ~0.20%
- 57: ~0.70%
- 58: ~0.20%
- 59: ~0.40%
- 60: ~0.30%
- 61: ~0.40%
- 63: ~0.80%
- 64: ~0.20%
- 65: ~0.90%
- 66: ~0.20%
- 67: ~0.20%
- 68: ~0.20%
- 69: ~0.10%
- 70: ~0.20%
- 71: ~0.20%
- 73: ~0.50%
- 74: ~0.10%
- 75: ~0.20%
- 76: ~0.20%
- 78: ~0.30%
- Samples:

| sentence | label |
|:--|:--|
| Продам кроссовки New Balance 574 Новые. Размер: 9 US, 42.5 EU Цена: 250 лари Больше моделей в шапке профиля. | 31 |
| Куплю Новый MagicQ MQ250M | 27 |
| КУПЛЮ iPhone 6s, 7, 8 возможно с дефектом‼️ | 15 |
- Loss: BatchHardSoftMarginTripletLoss
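BatchHardSoftMarginTripletLoss mines, for each sample, the hardest positive (farthest same-label sample in the batch) and hardest negative (closest other-label sample), then applies the soft-margin formulation log(1 + exp(d_pos − d_neg)). A small numpy sketch under those assumptions, with Euclidean distances and toy labeled embeddings:

```python
import numpy as np

def batch_hard_soft_margin_triplet(embs: np.ndarray, labels) -> float:
    """Soft-margin triplet loss with batch-hard mining (numpy sketch)."""
    diff = embs[:, None, :] - embs[None, :, :]
    dist = np.linalg.norm(diff, axis=2)            # pairwise Euclidean distances
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(embs)):
        pos_mask = same[i] & (np.arange(len(embs)) != i)
        neg_mask = ~same[i]
        if not pos_mask.any() or not neg_mask.any():
            continue  # need both a positive and a negative in the batch
        hardest_pos = dist[i][pos_mask].max()      # farthest same-label sample
        hardest_neg = dist[i][neg_mask].min()      # closest other-label sample
        losses.append(np.log1p(np.exp(hardest_pos - hardest_neg)))
    return float(np.mean(losses))

# Two tight, well-separated label clusters yield near-zero loss;
# interleaving the labels across the same points yields a large loss.
embs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
tight = batch_hard_soft_margin_triplet(embs, [0, 0, 1, 1])
mixed = batch_hard_soft_margin_triplet(embs, [0, 1, 0, 1])
print(tight < mixed)  # True
```

This is why the clusters dataset only needs (sentence, label) columns: triplets are formed on the fly inside each batch rather than stored explicitly.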
Evaluation Datasets
match-pairs
- Dataset: match-pairs
- Size: 536 evaluation samples
- Columns: anchor and positive
- Approximate statistics based on the first 536 samples:

| | anchor | positive |
|:--|:--|:--|
| type | string | string |
| details | min: 11 tokens, mean: 21.78 tokens, max: 45 tokens | min: 6 tokens, mean: 16.61 tokens, max: 39 tokens |
- Samples:

| anchor | positive |
|:--|:--|
| Отдам бульдозер Komatsu, почти новый, Ростов-на-Дону, 4 млн рублей. | Кто продает бульдозер Komatsu в Ростове-на-Дону? |
| Нужен PHP-разработчик, удаленка, ЗП до 150к. | Ищу работу как разработчик на PHP, можно удаленно, зарплата от 100к. |
| Ищу программиста Python, нужен опытный человек, чтобы сделать сайт для компании, пишите в личку, обсудим детали. | Программист python, опыт работы 2 года. |
- Loss: MultipleNegativesRankingLoss with these parameters:

```json
{ "scale": 20.0, "similarity_fct": "cos_sim" }
```
clusters
- Dataset: clusters
- Size: 19,452 evaluation samples
- Columns: sentence and label
- Approximate statistics based on the first 1000 samples:
  - sentence: string; min: 5 tokens, mean: 48.11 tokens, max: 128 tokens
  - label: int, with approximate class distribution:
- 0: ~1.80%
- 1: ~0.60%
- 2: ~0.50%
- 3: ~0.40%
- 4: ~0.60%
- 5: ~0.60%
- 6: ~1.80%
- 7: ~5.40%
- 8: ~0.30%
- 9: ~1.00%
- 10: ~0.40%
- 11: ~13.70%
- 12: ~0.70%
- 13: ~1.30%
- 14: ~1.60%
- 15: ~3.70%
- 16: ~2.60%
- 17: ~1.90%
- 18: ~3.60%
- 19: ~0.30%
- 20: ~0.80%
- 21: ~1.20%
- 22: ~2.70%
- 23: ~3.20%
- 24: ~5.30%
- 25: ~0.40%
- 26: ~4.10%
- 27: ~0.80%
- 28: ~2.00%
- 29: ~0.80%
- 30: ~0.70%
- 31: ~7.40%
- 32: ~1.20%
- 33: ~1.30%
- 34: ~0.80%
- 35: ~2.80%
- 36: ~0.50%
- 37: ~0.60%
- 38: ~0.30%
- 39: ~0.10%
- 40: ~0.80%
- 41: ~1.20%
- 42: ~0.40%
- 43: ~0.80%
- 44: ~2.10%
- 45: ~0.60%
- 46: ~0.50%
- 47: ~0.70%
- 48: ~0.60%
- 49: ~0.40%
- 50: ~0.90%
- 51: ~0.20%
- 52: ~0.60%
- 53: ~1.00%
- 54: ~1.10%
- 55: ~0.80%
- 56: ~0.30%
- 57: ~0.80%
- 58: ~0.30%
- 59: ~0.50%
- 60: ~0.30%
- 61: ~0.10%
- 62: ~0.30%
- 63: ~0.70%
- 64: ~0.50%
- 65: ~0.30%
- 66: ~0.60%
- 67: ~0.50%
- 68: ~0.10%
- 69: ~0.30%
- 70: ~0.20%
- 71: ~0.40%
- 72: ~0.10%
- 73: ~0.20%
- 74: ~0.10%
- 75: ~0.10%
- 76: ~0.40%
- 77: ~0.10%
- 78: ~0.30%
- Samples:

| sentence | label |
|:--|:--|
| Куплю клетчатую сумку с замком, либо подобную, пишите в лс | 1 |
| asus r 752 l - 1tb HDD, 12gb ddr3, nvidia GeForce 940, intel core i7 5500u - 550 лари. | 14 |
| срочно Продам геймпад Defender X7 с держателем для телефона Состояние - новый 1300р. | 15 |
- Loss: BatchHardSoftMarginTripletLoss
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- learning_rate: 2e-05
- weight_decay: 0.022
- num_train_epochs: 5
- lr_scheduler_type: cosine
- warmup_ratio: 0.17
- fp16: True
- dataloader_num_workers: 8
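As a sketch, the non-default values above correspond roughly to the following `SentenceTransformerTrainingArguments` configuration (the `output_dir` is illustrative, and remaining arguments are left at their defaults):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Illustrative mapping of the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="ru-en-RoSBERTa-trade-magnet",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    weight_decay=0.022,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.17,
    fp16=True,
    dataloader_num_workers=8,
)
```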
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.022
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.17
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 8
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | match-pairs loss | clusters loss |
|:--|:--|:--|:--|:--|
| 0.3546 | 50 | 0.6678 | 1.0062 | 0.6543 |
| 0.7092 | 100 | 0.7114 | 0.7569 | 0.6323 |
| 1.0638 | 150 | 0.6571 | 0.7267 | 0.6181 |
| 1.4184 | 200 | 0.6263 | 0.9529 | 0.6057 |
| 1.7730 | 250 | 0.6396 | 0.9458 | 0.5934 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
BatchHardSoftMarginTripletLoss
```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```