SentenceTransformer based on vinai/phobert-base

This is a sentence-transformers model finetuned from vinai/phobert-base. It maps Vietnamese sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: vinai/phobert-base
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 135M parameters (F32)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
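The Pooling module above turns per-token embeddings into a single 768-dimensional sentence vector via mean pooling over non-padding tokens (`pooling_mode_mean_tokens: True`). As a minimal illustration of what that operation computes, here is a NumPy sketch with toy dimensions (4 tokens, dim 3 instead of the real 128 tokens, dim 768):

```python
import numpy as np

# Toy token embeddings: batch of 2 sentences, 4 tokens each, dim 3
token_embeddings = np.arange(24, dtype=float).reshape(2, 4, 3)
attention_mask = np.array([[1, 1, 1, 0],   # last token is padding
                           [1, 1, 0, 0]])  # last two tokens are padding

mask = attention_mask[:, :, None]               # (2, 4, 1), broadcastable
summed = (token_embeddings * mask).sum(axis=1)  # sum real tokens only
counts = mask.sum(axis=1)                       # number of real tokens
sentence_embeddings = summed / counts           # (2, 3) mean-pooled vectors
```

Padding tokens are masked out before averaging, so sentence length does not skew the embedding.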

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("trongvox/phobert-semactic-retrival-food-2")
# Run inference
sentences = [
    'Ốc vú nàng khá quý hiếm, là một trong những món ăn đặc sản ngon nổi tiếng nhất tại Côn Đảo. Ốc vú nàng có vỏ hình chóp lệch, trên đỉnh có một núm nhỏ, vỏ ngoài màu xám đen, mặt trong lấp lánh ánh xà cừ, dùng cát xát vào vỏ thì con ốc sẽ ánh lên một màu hồng sáng, càng lớn thì vỏ ốc có màu hồng càng đậm. \n\nThông thường, ốc vú nàng chỉ to bằng khoảng ba ngón tay người lớn, nhưng ốc vú nàng ở Côn Đảo có thể to gần bằng bàn tay. Ốc vú nàng có vô vàn cách chế biến và cách thưởng thức khác nhau như luộc, làm gỏi, hấp... nhưng được ưa thích nhất là nướng. Dù chế biến theo cách nào thì cũng  đều mang một hương vị thơm ngon riêng biệt không lẫn với bất kỳ loại ốc nào. Loại ốc vú nàng này được ngư dân Côn Đảo khai thác chủ yếu tại khu vực Hòn Tài, Hòn Trác và luôn giữ được độ tươi khi đưa vào chế biến.',
    'Ốc vú nàng',
    'Trứng chiên thịt băm',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
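For semantic search, the usual pattern is to encode a query and a corpus separately, then rank corpus entries by cosine similarity. A minimal sketch of that ranking step in NumPy, with random 768-dimensional vectors standing in for real `model.encode(...)` output:

```python
import numpy as np

def cos_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Random vectors standing in for model.encode(...) output
rng = np.random.default_rng(42)
query_emb = rng.normal(size=(1, 768))    # one query
corpus_emb = rng.normal(size=(5, 768))   # five corpus documents

scores = cos_sim(query_emb, corpus_emb)  # shape (1, 5)
ranking = np.argsort(-scores[0])         # corpus indices, best match first
```

With real embeddings, `ranking[0]` would be the corpus document (e.g. a dish description) closest in meaning to the query (e.g. a dish name).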

Training Details

Training Dataset

Unnamed Dataset

  • Size: 37,424 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:

            sentence_0        sentence_1      label
    type    string            string          float
    min     71 tokens         3 tokens        0.0
    mean    127.52 tokens     7.36 tokens     0.5
    max     128 tokens        21 tokens       1.0
  • Samples:

    Sample 1
      sentence_0: Vua ngon, vua re, vua bo la 3 the manh cua mon an nay trong bua com sinh vien. Phan da, mon an nay mot tuan xuat hien tu 5 den 7 lan mot tuan trong mam com cua sinh vien vi no don gian, de nau va cung kha day du dinh duong. Do nuoc sot ca chua len dau hu vua ran rac hanh la va tieu len la an vo cung bat com. Day la mon vua ngon, vua re an cung com nong. Rieng mon an ve dau, ban co the tha ho sang tao va che bien. Dau hu mang rat nhieu chat dinh duong, va dac biet la dam khi che bien cung ca chua se rat phu hop. Mon an nay tuy re nhung cung rat bo do.
        Nguyen lieu:
        Dau hu Ca chua Toi, hanh la
        Cach thuc hien:
        Cho dau an vao chao, dun den khi dau gia thi cho dau vao ran deu cac mat.Phi thom toi roi cho ca chua cung voi chut nuoc soi vao.Xao ca chua den khi mem thi cho dau hu da ran vao.Them nem gia vi sao cho vua mieng roi dun den khi thay nuoc ca chua sen set thi cho chem chut hanh la vao roi tat bep.
      sentence_1: Đậu hũ sốt cà chua
      label: 1.0

    Sample 2
      sentence_0: Du troi nang hay mua, mua dong gia ret hay mua he nong nuc thi mon kem van la mon an "khong the cuong lai duoc" va nhat la loai kem socola. Cach lam kem socola cung kha don gian va ban co the tu lam tai nha.Nguyen lieu:Whipping cream: 400 gSua dac: 140 gBot ca cao nguyen chat: 60 gHop dung, pho danh trung hoac may danh trungCach lam:Dau tien cho 400g whipping da duoc lam lanh ra to lon. Dung phoi hoac may danh trung danh den khi whipping chuyen trang thai bong mem. Nhat phoi len tao chop hoi quap xuong.Cho 140g sua dac va 60g bot ca cao nguyen chat vao whipping. Dung phoi long tron deu hon hop len. Luu y dung tron qua lau se khien hon hop bi tach nuoc. Khi hon hop deu, kha dac va sanh min thi dat.Sau khi tron deu hon hop, ban cho hon hop vao khuon, khay hay to, roi dung phoi dan cho deu hon hop. Sau do, dung mang boc thuc pham boc kin be mat lop kem lai roi cho vao ngan da tu lanh khoang 4 tieng dong ho.Sau 4 tieng lay kem ra va thuong thuc. De tang them mui vi, ban co the an kem kem v...
      sentence_1: Kem socola
      label: 1.0

    Sample 3
      sentence_0: Nguyen lieu:
        500 gr suon non3 nhanh hanh la thai khuc dai5 tep toi1 muong canh ruou3 muong canh nuoc tuong = xi dau1 muong canh duong1/2 muong ca phe tieu den1 muong canh nuoc mam1 trai ot sung700 ml nuoc xuong ga hay nuoc lanh.
        Cach che bien:
        Dau tien khi suon non mua ve cac ban lay suon chat mieng vua an, sau do rua suon qua nuoc co pha muoi roi xa nuoc lanh that sach.Sau do cac ban bat chao len bep, cho vao 2 muong canh dau, cho dau hoi nong cho hanh la vao xao 1 phut, ke den cho toi vao xao them 1 phut nua. Tiep theo cho suon vao vao cho that san roi moi cho nuoc tuong vao xao 4 phut phut nua. Cuoi cung cho nuoc + ruou va tat ca cac gia vi con lai vao, day nap ham voi lua vua. Khi nuoc hoi sanh lai thi nem nem lai cho vua an la tat bep.Thanh pham va trinh bay: Cho suon kho tau ra dia, trai ot de len hay mot chut la mui ta (ngo) va rac chut hat tieu len tren cho hap dan va day vi nhe.
      sentence_1: Sườn kho tàu
      label: 1.0
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
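MultipleNegativesRankingLoss treats each batch row's sentence_1 as the positive for its sentence_0 and every other sentence_1 in the batch as a negative: scaled cosine similarities go through a cross-entropy with the diagonal as labels. A minimal NumPy sketch of that computation (illustrative only, not the library implementation):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives ranking loss over scaled cosine similarities."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                  # (batch, batch) similarity matrix
    # Numerically stable log-softmax per row; the correct "class" for
    # anchor i is positive i, i.e. the diagonal entry
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When each anchor is far more similar to its own positive than to the other positives in the batch, the loss approaches zero; the `scale` of 20.0 sharpens the softmax over the cosine scores.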

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 4
  • multi_dataset_batch_sampler: round_robin
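In recent sentence-transformers releases (3.x, as used here), these non-default values would be passed through SentenceTransformerTrainingArguments. A hedged sketch of the corresponding trainer setup; the output path and toy dataset are placeholders, not the actual 37,424-pair training set:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("vinai/phobert-base")
loss = MultipleNegativesRankingLoss(model)

# Toy (description, dish name) pairs standing in for the real dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["mo ta mon an thu nhat", "mo ta mon an thu hai"],
    "sentence_1": ["Ten mon 1", "Ten mon 2"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/phobert-food",   # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss,
)
trainer.train()
```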

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.2138 500 2.1363
0.4275 1000 1.9874
0.6413 1500 1.9273
0.8551 2000 1.9023
1.0688 2500 1.8001
1.2826 3000 1.6671
1.4964 3500 1.6611
1.7101 4000 1.6839
1.9239 4500 1.6716
2.1377 5000 1.5615
2.3514 5500 1.4695
2.5652 6000 1.4506
2.7790 6500 1.4754
2.9927 7000 1.4856
3.2065 7500 1.3189
3.4203 8000 1.3134
3.6340 8500 1.3328
3.8478 9000 1.3009

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}