Part of BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language.
Link to arXiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: [email protected]
How to use:
With sentence-transformers:
from sentence_transformers import CrossEncoder
model_path = "clarin-knext/herbert-base-reranker-msmarco"
model = CrossEncoder(model_path, max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
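In a retrieval pipeline, these scores can be used to rerank candidate passages returned by a first-stage retriever. A minimal sketch of that usage (the query and passages below are illustrative examples, not taken from the benchmark):

from sentence_transformers import CrossEncoder

model = CrossEncoder("clarin-knext/herbert-base-reranker-msmarco", max_length=512)

# Illustrative query and candidate passages (e.g. from a first-stage retriever such as BM25).
query = "Jakie miasto jest stolicą Polski?"
passages = [
    "Stolicą Polski jest Warszawa.",
    "Kraków był stolicą Polski do 1596 roku.",
    "Wisła jest najdłuższą rzeką w Polsce.",
]

# Score every (query, passage) pair and print passages from most to least relevant.
scores = model.predict([(query, p) for p in passages])
for passage, score in sorted(zip(passages, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}\t{passage}")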
With transformers:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_path = "clarin-knext/herbert-base-reranker-msmarco"
model = AutoModelForSequenceClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
features = tokenizer(['Jakie miasto jest stolicą Polski?'], ['Stolicą Polski jest Warszawa.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
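Several passages can be scored for one query in a single batch by tokenizing the query together with each passage as a pair. A minimal sketch continuing from the snippet above (the passages are illustrative, and it assumes the model returns a single relevance logit per pair):

# Reuses `model` and `tokenizer` loaded above.
query = "Jakie miasto jest stolicą Polski?"
passages = [
    "Stolicą Polski jest Warszawa.",
    "Kraków był stolicą Polski do 1596 roku.",
]

# Each row is encoded as a (query, passage) pair: [CLS] query [SEP] passage [SEP].
features = tokenizer([query] * len(passages), passages,
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)  # one relevance score per pair

for passage, score in sorted(zip(passages, logits.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}\t{passage}")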