---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:26
- loss:MultipleNegativesRankingLoss
- loss:MatryoshkaLoss
widget:
- source_sentence: The Employee agrees to diligently, honestly, and to the best of
    their abilities, perform all
  sentences:
  - What are the Payment Terms for the Batteries?
  - What are the general obligations of the Employee?
  - according to the MOU?
- source_sentence: The Company has employed the Employee to render services as described
    herein from the
  sentences:
  - order?
  - When does the Company employ the Employee?
  - What is the Delivery Schedule for the Batteries?
- source_sentence: The Employee agrees to be employed on the terms and conditions
    set out in this Agreement.
  sentences:
  - What is the term of the Agreement?
  - What are the specific terms and conditions of employment?
  - single order?
- source_sentence: The Supplier warrants that the Batteries shall be free from defects
    in materials and
  sentences:
  - What is the pricing per Battery under this Agreement?
  - When does the Employee commence employment with the Employer?
  - What warranties are provided by the Supplier for the Batteries?
- source_sentence: The Employee agrees to abide by the Employer’s rules, regulations,
    guidelines, policies, and
  sentences:
  - Which law governs this Agreement, and where would disputes be resolved?
  - What are the initial job title and duties of the Employee?
  - What rules and policies must the Employee abide by?
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
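The architecture above pools the BERT token embeddings by taking the CLS token and then L2-normalizes the result. A minimal NumPy sketch of those two modules, with toy shapes and random token embeddings standing in for the transformer's output:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling (take the first token's vector), then L2-normalize it."""
    cls = token_embeddings[:, 0, :]                        # (batch, dim)
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# Toy batch: 2 sequences of 4 tokens with 8-dim token embeddings
tokens = np.random.default_rng(0).normal(size=(2, 4, 8))
sentence_emb = cls_pool_and_normalize(tokens)
print(sentence_emb.shape)                                  # (2, 8)
print(np.linalg.norm(sentence_emb, axis=1))                # both ~1.0
```

In the real model the pooled vectors are 768-dimensional; the unit-length output is what makes cosine similarity the natural comparison function.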
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("vineet10/new_model")
# Run inference
sentences = [
    'The Employee agrees to abide by the Employer’s rules, regulations, guidelines, policies, and',
    'What rules and policies must the Employee abide by?',
    'What are the initial job title and duties of the Employee?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
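Because the final `Normalize()` module emits unit-length vectors, cosine similarity reduces to a plain dot product, so ranking a corpus of contexts against a query is a single matrix multiply. A small illustration with hand-made 4-dim unit vectors (the real model produces 768-dim embeddings):

```python
import numpy as np

# Hypothetical pre-computed, L2-normalized embeddings (dim 4 for illustration)
query = np.array([0.6, 0.8, 0.0, 0.0])
corpus = np.array([
    [0.6, 0.8, 0.0, 0.0],   # identical direction -> similarity 1.0
    [0.0, 0.0, 1.0, 0.0],   # orthogonal -> similarity 0.0
    [0.8, 0.6, 0.0, 0.0],   # close direction -> similarity 0.96
])
# For unit-length vectors, cosine similarity is just a dot product
scores = corpus @ query
best = int(np.argmax(scores))
print(scores)   # approximately [1.0, 0.0, 0.96]
print(best)     # 0
```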
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 26 training samples
* Columns: <code>context</code> and <code>question</code>
* Approximate statistics based on the first 1000 samples:
  | | context | question |
  |:--------|:--------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|
  | type | string | string |
  | details | <code>Force majeure events include acts of God, war, terrorism, strikes, labor disputes, natural</code> | <code>What events constitute Force Majeure under this Agreement?</code> |
  | | <code>This Agreement commences on April 1, 2023, and terminates on April 1, 2024.</code> | <code>When does this Agreement terminate?</code> |
  | | | <code>Babbar?</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
```
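MatryoshkaLoss trains the model so that prefixes of the embedding (768, 512, 256, 128, and 64 dims here) remain useful on their own. At inference time this means you can keep only the first `dim` components of each embedding and re-normalize, trading a little accuracy for smaller storage and faster search. A minimal NumPy sketch of that truncation step:

```python
import numpy as np

def truncate_and_renormalize(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components, then re-normalize to unit length."""
    truncated = emb[..., :dim]
    return truncated / np.linalg.norm(truncated, axis=-1, keepdims=True)

# Stand-in for model output: 3 unit-length 768-dim embeddings
full = np.random.default_rng(1).normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

small = truncate_and_renormalize(full, 256)
print(small.shape)                      # (3, 256)
print(np.linalg.norm(small, axis=1))    # all ~1.0
```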
### Evaluation Dataset
#### Unnamed Dataset
* Size: 3 evaluation samples
* Columns: <code>question</code>, <code>context</code>, and <code>id</code>
* Approximate statistics based on the first 1000 samples:
  | | question | context | id |
  |:--------|:-----------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------|:--------------|
  | type | string | string | int |
  | details | <code>What financial settlement does Deepak Babbar agree to in the MOU?</code> | <code>Answer: Deepak Babbar agrees to pay Rs 5,10,000 as a full and final settlement to Ayushi</code> | <code>15</code> |
  | | <code>What are the duties of the Employee?</code> | <code>The Employee will perform any and all duties as required by the Company that are</code> | <code>7</code> |
  | | <code>MOU?</code> | <code>Answer: Deepak Babbar makes the final payment of Rs 2,60,000 at the time of quashing FIR</code> | <code>17</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
```
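MultipleNegativesRankingLoss uses in-batch negatives: for each (question, context) pair, the matching context is the positive and every other context in the batch serves as a negative. Scaled cosine similarities become logits for a cross-entropy whose targets sit on the diagonal. A NumPy sketch of that objective (not the library's implementation), using the `scale: 20.0` from the config above:

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Cross-entropy over scaled cosine similarities; for row i, the matching
    positive i is the target and all other rows act as negatives."""
    logits = scale * (anchors @ positives.T)                       # (batch, batch)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)       # unit rows -> dot = cosine

loss_matched = mnr_loss(a, a)                        # correct pairings -> low loss
loss_shuffled = mnr_loss(a, np.roll(a, 1, axis=0))   # wrong pairings -> high loss
print(loss_matched < loss_shuffled)                  # True
```

The scale factor sharpens the softmax so the model is pushed to rank the true context well above the in-batch negatives.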
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters