SentenceTransformer based on lufercho/ArxBert-MLM
This is a sentence-transformers model finetuned from lufercho/ArxBert-MLM. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: lufercho/ArxBert-MLM
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
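The Pooling module above averages all token embeddings into a single sentence vector (mean pooling). As a rough sketch of that computation with plain transformers (this assumes the repository's BERT weights load directly via AutoModel; the encode() call shown under Usage is the supported path):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the Transformer weights of this repo load via AutoModel.
tokenizer = AutoTokenizer.from_pretrained("lufercho/ArxvBert-ST_v2")
bert = AutoModel.from_pretrained("lufercho/ArxvBert-ST_v2")

encoded = tokenizer(["An example arXiv title"], padding=True, truncation=True,
                    max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```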
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("lufercho/ArxvBert-ST_v2")
# Run inference
sentences = [
'Approximation of the distribution of a stationary Markov process with\n application to option pricing',
" We build a sequence of empirical measures on the space D(R_+,R^d) of\nR^d-valued c\\`adl\\`ag functions on R_+ in order to approximate the law of a\nstationary R^d-valued Markov and Feller process (X_t). We obtain some general\nresults of convergence of this sequence. Then, we apply them to Brownian\ndiffusions and solutions to L\\'evy driven SDE's under some Lyapunov-type\nstability assumptions. As a numerical application of this work, we show that\nthis procedure gives an efficient way of option pricing in stochastic\nvolatility models.\n",
" We provide a new estimate of the local supermassive black hole mass function\nusing (i) the empirical relation between supermassive black hole mass and the\nSersic index of the host spheroidal stellar system and (ii) the measured\n(spheroid) Sersic indices drawn from 10k galaxies in the Millennium Galaxy\nCatalogue. The observational simplicity of our approach, and the direct\nmeasurements of the black hole predictor quantity, i.e. the Sersic index, for\nboth elliptical galaxies and the bulges of disc galaxies makes it\nstraightforward to estimate accurate black hole masses in early- and late-type\ngalaxies alike. We have parameterised the supermassive black hole mass function\nwith a Schechter function and find, at the low-mass end, a logarithmic slope\n(1+alpha) of ~0.7 for the full galaxy sample and ~1.0 for the early-type galaxy\nsample. Considering spheroidal stellar systems brighter than M_B = -18 mag, and\nintegrating down to black hole masses of 10^6 M_sun, we find that the local\nmass density of supermassive black holes in early-type galaxies rho_{bh,\nearly-type} = (3.5+/-1.2) x 10^5 h^3_{70} M_sun Mpc^{-3}, and in late-type\ngalaxies rho_{bh, late-type} = (1.0+/-0.5) x 10^5 h^3_{70} M_sun Mpc^{-3}. The\nuncertainties are derived from Monte Carlo simulations which include\nuncertainties in the M_bh-n relation, the catalogue of Sersic indices, the\ngalaxy weights and Malmquist bias. The combined, cosmological, supermassive\nblack hole mass density is thus Omega_{bh, total} = (3.2+/-1.2) x 10^{-6} h_70.\nThat is, using a new and independent method, we conclude that (0.007+/-0.003)\nh^3_{70} per cent of the universe's baryons are presently locked up in\nsupermassive black holes at the centres of galaxies.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
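The same embeddings also support semantic search over a small corpus. A minimal sketch using sentence_transformers.util (the query text and top_k value are illustrative):

```python
from sentence_transformers import util

# Embed an illustrative query and rank the three sentences above against it.
query_embedding = model.encode("How can option prices be computed in stochastic volatility models?")
hits = util.semantic_search(query_embedding, embeddings, top_k=2)

# Each hit holds the corpus index and its cosine-similarity score.
for hit in hits[0]:
    print(hit["corpus_id"], round(hit["score"], 3))
```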
Training Details
Training Dataset
Unnamed Dataset
- Size: 500 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 500 samples:

|         | sentence_0 | sentence_1 |
|---------|------------|------------|
| type    | string | string |
| details | min: 6 tokens, mean: 16.92 tokens, max: 51 tokens | min: 10 tokens, mean: 175.28 tokens, max: 512 tokens |
- Samples:

| sentence_0 | sentence_1 |
|------------|------------|
| Lifetime of doubly charmed baryons | In this work, we evaluate the lifetimes of the doubly charmed baryons $\Xi_{cc}^{+}$, $\Xi_{cc}^{++}$ and $\Omega_{cc}^{+}$. We carefully calculate the non-spectator contributions at the quark level where the Cabibbo-suppressed diagrams are also included. The hadronic matrix elements are evaluated in the simple non-relativistic harmonic oscillator model. Our numerical results are generally consistent with that obtained by other authors who used the diquark model. However, all the theoretical predictions on the lifetimes are one order larger than the upper limit set by the recent SELEX measurement. This discrepancy would be clarified by the future experiment, if more accurate experiment still confirms the value of the SELEX collaboration, there must be some unknown mechanism to be explored. |
| Broadening the Higgs Boson with Right-Handed Neutrinos and a Higher Dimension Operator at the Electroweak Scale | The existence of certain TeV suppressed higher-dimension operators may open up new decay channels for the Higgs boson to decay into lighter right-handed neutrinos. These channels may dominate over all other channels if the Higgs boson is light. For a Higgs boson mass larger than $2 m_W$ the new decays are subdominant yet still of interest. The right-handed neutrinos have macroscopic decay lengths and decay mostly into final states containing leptons and quarks. A distinguishing collider signature of this scenario is a pair of displaced vertices violating lepton number. A general operator analysis is performed using the minimal flavor violation hypothesis to illustrate that these novel decay processes can occur while remaining consistent with experimental constraints on lepton number violating processes. In this context the question of whether these new decay modes dominate is found to depend crucially on the approximate flavor symmetries of the right-handed neutrinos. |
| Infrared Evolution Equations: Method and Applications | It is a brief review on composing and solving Infrared Evolution Equations. They can be used in order to calculate amplitudes of high-energy reactions in different kinematic regions in the double-logarithmic approximation. |

- Loss: MultipleNegativesRankingLoss with these parameters:

  { "scale": 20.0, "similarity_fct": "cos_sim" }
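A sketch of constructing this loss in code; scale=20.0 and cosine similarity are the library defaults, written out explicitly here:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("lufercho/ArxBert-MLM")
# In-batch negatives: each (sentence_0, sentence_1) pair is a positive, and
# the other sentence_1 values in the same batch serve as negatives.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```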
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
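Continuing the loss sketch above, a training run with these non-default values might look like the following (the dataset rows and output directory are placeholders, not the actual training setup):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

# Placeholder rows; the actual dataset contains 500 (title, abstract) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Lifetime of doubly charmed baryons"],
    "sentence_1": ["In this work, we evaluate the lifetimes of the doubly charmed baryons ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/arxvbert-st",  # placeholder path
    num_train_epochs=10,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,  # model and loss from the sketch above
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```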
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
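To approximate this environment, the versions above can be pinned at install time (PyTorch builds vary by CUDA version; 2.5.1+cu121 corresponds to the cu121 wheel):

pip install sentence-transformers==3.3.1 transformers==4.46.2 torch==2.5.1 accelerate==1.1.1 datasets==3.1.0 tokenizers==0.20.3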
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}