SentenceTransformer based on lufercho/ArxBert-MLM

This is a sentence-transformers model finetuned from lufercho/ArxBert-MLM. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: lufercho/ArxBert-MLM
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
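
The pooling block above means sentence embeddings are produced by mean-pooling the token embeddings of the underlying BertModel. A minimal sketch for checking the advertised dimensions and pooling setup at runtime (model id taken from this repository, lufercho/ArxvBert-ST_v2):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lufercho/ArxvBert-ST_v2")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # cosine
print(model[1].pooling_mode_mean_tokens)         # True: mean pooling is active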

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lufercho/ArxvBert-ST_v2")
# Run inference
sentences = [
    'Approximation of the distribution of a stationary Markov process with\n  application to option pricing',
    "  We build a sequence of empirical measures on the space D(R_+,R^d) of\nR^d-valued c\\`adl\\`ag functions on R_+ in order to approximate the law of a\nstationary R^d-valued Markov and Feller process (X_t). We obtain some general\nresults of convergence of this sequence. Then, we apply them to Brownian\ndiffusions and solutions to L\\'evy driven SDE's under some Lyapunov-type\nstability assumptions. As a numerical application of this work, we show that\nthis procedure gives an efficient way of option pricing in stochastic\nvolatility models.\n",
    "  We provide a new estimate of the local supermassive black hole mass function\nusing (i) the empirical relation between supermassive black hole mass and the\nSersic index of the host spheroidal stellar system and (ii) the measured\n(spheroid) Sersic indices drawn from 10k galaxies in the Millennium Galaxy\nCatalogue. The observational simplicity of our approach, and the direct\nmeasurements of the black hole predictor quantity, i.e. the Sersic index, for\nboth elliptical galaxies and the bulges of disc galaxies makes it\nstraightforward to estimate accurate black hole masses in early- and late-type\ngalaxies alike. We have parameterised the supermassive black hole mass function\nwith a Schechter function and find, at the low-mass end, a logarithmic slope\n(1+alpha) of ~0.7 for the full galaxy sample and ~1.0 for the early-type galaxy\nsample. Considering spheroidal stellar systems brighter than M_B = -18 mag, and\nintegrating down to black hole masses of 10^6 M_sun, we find that the local\nmass density of supermassive black holes in early-type galaxies rho_{bh,\nearly-type} = (3.5+/-1.2) x 10^5 h^3_{70} M_sun Mpc^{-3}, and in late-type\ngalaxies rho_{bh, late-type} = (1.0+/-0.5) x 10^5 h^3_{70} M_sun Mpc^{-3}. The\nuncertainties are derived from Monte Carlo simulations which include\nuncertainties in the M_bh-n relation, the catalogue of Sersic indices, the\ngalaxy weights and Malmquist bias. The combined, cosmological, supermassive\nblack hole mass density is thus Omega_{bh, total} = (3.2+/-1.2) x 10^{-6} h_70.\nThat is, using a new and independent method, we conclude that (0.007+/-0.003)\nh^3_{70} per cent of the universe's baryons are presently locked up in\nsupermassive black holes at the centres of galaxies.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
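
Since the training pairs are arXiv titles and abstracts (see Training Details below), a natural application is retrieving abstracts for a short query. A minimal sketch, with an illustrative query and corpus that are not part of the training data:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lufercho/ArxvBert-ST_v2")

# Illustrative query and corpus
query = "Option pricing in stochastic volatility models"
corpus = [
    "We approximate the law of a stationary Markov process and apply it to option pricing.",
    "We estimate the local supermassive black hole mass function from Sersic indices.",
]

query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

# Rank the corpus by cosine similarity to the query
scores = model.similarity(query_emb, corpus_emb)  # shape [1, 2]
best = int(scores.argmax())
print(corpus[best], float(scores[0, best]))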

Training Details

Training Dataset

Unnamed Dataset

  • Size: 500 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 500 samples:
    • sentence_0: string; min: 6 tokens, mean: 16.92 tokens, max: 51 tokens
    • sentence_1: string; min: 10 tokens, mean: 175.28 tokens, max: 512 tokens
  • Samples (each pair is an arXiv paper title and its abstract):
    • sentence_0: Lifetime of doubly charmed baryons
      sentence_1: In this work, we evaluate the lifetimes of the doubly charmed baryons $\Xi_{cc}^{+}$, $\Xi_{cc}^{++}$ and $\Omega_{cc}^{+}$. We carefully calculate the non-spectator contributions at the quark level where the Cabibbo-suppressed diagrams are also included. The hadronic matrix elements are evaluated in the simple non-relativistic harmonic oscillator model. Our numerical results are generally consistent with that obtained by other authors who used the diquark model. However, all the theoretical predictions on the lifetimes are one order larger than the upper limit set by the recent SELEX measurement. This discrepancy would be clarified by the future experiment, if more accurate experiment still confirms the value of the SELEX collaboration, there must be some unknown mechanism to be explored.
    • sentence_0: Broadening the Higgs Boson with Right-Handed Neutrinos and a Higher Dimension Operator at the Electroweak Scale
      sentence_1: The existence of certain TeV suppressed higher-dimension operators may open up new decay channels for the Higgs boson to decay into lighter right-handed neutrinos. These channels may dominate over all other channels if the Higgs boson is light. For a Higgs boson mass larger than $2 m_W$ the new decays are subdominant yet still of interest. The right-handed neutrinos have macroscopic decay lengths and decay mostly into final states containing leptons and quarks. A distinguishing collider signature of this scenario is a pair of displaced vertices violating lepton number. A general operator analysis is performed using the minimal flavor violation hypothesis to illustrate that these novel decay processes can occur while remaining consistent with experimental constraints on lepton number violating processes. In this context the question of whether these new decay modes dominate is found to depend crucially on the approximate flavor symmetries of the right-handed neutrinos.
    • sentence_0: Infrared Evolution Equations: Method and Applications
      sentence_1: It is a brief review on composing and solving Infrared Evolution Equations. They can be used in order to calculate amplitudes of high-energy reactions in different kinematic regions in the double-logarithmic approximation.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
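
With this loss, each sentence_0 (title) is matched to its own sentence_1 (abstract) as the positive, and the other abstracts in the batch serve as in-batch negatives: the scaled cosine-similarity matrix is scored with cross-entropy against the diagonal. A minimal PyTorch sketch of the objective, with random embeddings standing in for model outputs:

import torch
import torch.nn.functional as F

batch, dim, scale = 16, 768, 20.0
anchors = F.normalize(torch.randn(batch, dim), dim=-1)    # title embeddings
positives = F.normalize(torch.randn(batch, dim), dim=-1)  # abstract embeddings

# Cosine similarities of every title against every abstract, scaled by 20
sim = anchors @ positives.T * scale  # [16, 16]

# The matching abstract sits on the diagonal; all other columns are negatives
loss = F.cross_entropy(sim, torch.arange(batch))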
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
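
As a sketch, these non-default values map onto the Sentence Transformers v3 trainer API as follows; the output directory and the one-row dataset are placeholders:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("lufercho/ArxBert-MLM")

# Placeholder (title, abstract) pair; the real dataset has 500 such rows
train_dataset = Dataset.from_dict({
    "sentence_0": ["Lifetime of doubly charmed baryons"],
    "sentence_1": ["In this work, we evaluate the lifetimes of the doubly charmed baryons ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="arxvbert-st-v2",  # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),  # scale=20.0, cos_sim by default
)
trainer.train()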

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.46.2
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
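
To mirror this environment, the Python packages can be pinned at install time (PyTorch 2.5.1 with the cu121 build is installed separately, following the PyTorch instructions):

pip install sentence-transformers==3.3.1 transformers==4.46.2 accelerate==1.1.1 datasets==3.1.0 tokenizers==0.20.3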

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}