|
--- |
|
license: mit |
|
datasets: |
|
- squad |
|
- eli5 |
|
- sentence-transformers/embedding-training-data |
|
- sentence-transformers/gooaq |
|
- KennethTM/squad_pairs_danish |
|
- KennethTM/eli5_question_answer_danish |
|
- KennethTM/gooaq_pairs_danish |
|
language: |
|
- da |
|
--- |
|
|
|
# Note |
|
|
|
*This is an updated version of [KennethTM/MiniLM-L6-danish-reranker](https://huggingface.co/KennethTM/MiniLM-L6-danish-reranker). This version is trained on additional data (the [GooAQ dataset](https://huggingface.co/datasets/sentence-transformers/gooaq) machine translated to [Danish](https://huggingface.co/datasets/KennethTM/gooaq_pairs_danish)) but is otherwise identical.*
|
|
|
# MiniLM-L6-danish-reranker-v2 |
|
|
|
This is a lightweight (~22 M parameters) [sentence-transformers](https://www.SBERT.net) model for Danish NLP: it takes two texts as input and outputs a relevance score. The model can therefore be used for information retrieval, e.g. given a query and a set of candidate passages, ranking the candidates by their relevance to the query.
|
|
|
The maximum sequence length is 512 tokens (query and passage combined).
|
|
|
The model was not pre-trained from scratch but adapted from the English version of [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) with a [Danish tokenizer](https://huggingface.co/KennethTM/bert-base-uncased-danish). |
|
|
|
The model was trained on ELI5, SQuAD, and GooAQ data machine translated from English to Danish.
|
|
|
## Usage with Transformers |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('KennethTM/MiniLM-L6-danish-reranker-v2')
tokenizer = AutoTokenizer.from_pretrained('KennethTM/MiniLM-L6-danish-reranker-v2')

# Two examples where the first is a positive example and the second is a negative example
queries = ['Kører der cykler på vejen?',
           'Kører der cykler på vejen?']
passages = ['I Danmark er cykler et almindeligt transportmiddel, og de har lige så stor ret til at bruge vejene som bilister. Cyklister skal dog følge færdselsreglerne og vise hensyn til andre trafikanter.',
            'Solen skinner, og himlen er blå. Der er ingen vind, og temperaturen er perfekt. Det er den perfekte dag til at tage en tur på landet og nyde den friske luft.']

features = tokenizer(queries, passages, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits

# The scores are raw logits; they can be mapped to probabilities with the sigmoid function
# Higher values indicate higher relevance
print(scores)
print(torch.sigmoid(scores))
```
|
|
|
## Usage with SentenceTransformers |
|
|
|
Usage is easier with [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the model like this:
|
```python
from sentence_transformers import CrossEncoder
import numpy as np

sigmoid_numpy = lambda x: 1/(1 + np.exp(-x))

# Provide examples as a list of query-passage tuples
pairs = [('Kører der cykler på vejen?',
          'I Danmark er cykler et almindeligt transportmiddel, og de har lige så stor ret til at bruge vejene som bilister. Cyklister skal dog følge færdselsreglerne og vise hensyn til andre trafikanter.'),
         ('Kører der cykler på vejen?',
          'Solen skinner, og himlen er blå. Der er ingen vind, og temperaturen er perfekt. Det er den perfekte dag til at tage en tur på landet og nyde den friske luft.')]

model = CrossEncoder('KennethTM/MiniLM-L6-danish-reranker-v2', max_length=512)
scores = model.predict(pairs)

print(scores)
print(sigmoid_numpy(scores))
```
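For retrieval, the predicted scores can be used to order candidate passages. A minimal sketch of the sorting step, using made-up logit scores in place of the output of `model.predict`:

```python
import numpy as np

# Hypothetical logits for three candidate passages (stand-in for model.predict output)
scores = np.array([4.2, -3.1, 0.7])
passages = ["passage A", "passage B", "passage C"]

# Sort candidates from most to least relevant (highest score first)
order = np.argsort(-scores)
ranked = [(passages[i], float(scores[i])) for i in order]
print(ranked)
```

Since the sigmoid is monotonic, sorting by raw logits gives the same ranking as sorting by probabilities.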