bge-base-en-v1.5-sparse
Usage
This is the sparse ONNX variant of the bge-base-en-v1.5 embeddings model, accelerated with Sparsify for quantization/pruning and DeepSparseSentenceTransformers for inference.
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('neuralmagic/bge-base-en-v1.5-sparse', export=False)

# Sentences we'd like to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
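Once sentences are encoded, the embeddings are typically compared with cosine similarity to score semantic relatedness. A minimal sketch using only NumPy (the `emb` array below is a hypothetical stand-in; in practice you would use rows of the `embeddings` array returned by `model.encode`):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors for illustration; real embeddings come from model.encode()
emb = np.array([[1.0, 0.0, 1.0],
                [1.0, 0.1, 0.9],
                [0.0, 1.0, 0.0]])

print(cosine_similarity(emb[0], emb[1]))  # close to 1.0: similar vectors
print(cosine_similarity(emb[0], emb[2]))  # 0.0: orthogonal vectors
```

Scores near 1.0 indicate semantically similar sentences; scores near 0.0 indicate unrelated ones.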
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
Evaluation results
All results are self-reported on MTEB test sets:

| Dataset | Metric | Value |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 75.388 |
| AmazonCounterfactualClassification (en) | ap | 38.806 |
| AmazonCounterfactualClassification (en) | f1 | 69.529 |
| AmazonPolarityClassification | accuracy | 90.728 |
| AmazonPolarityClassification | ap | 87.079 |
| AmazonPolarityClassification | f1 | 90.710 |
| AmazonReviewsClassification (en) | accuracy | 45.494 |
| AmazonReviewsClassification (en) | f1 | 44.918 |
| ArxivClusteringP2P | v_measure | 46.505 |
| ArxivClusteringS2S | v_measure | 40.080 |