Update README.md
README.md
CHANGED
@@ -8,4 +8,25 @@ library_name: sentence-transformers
 pipeline_tag: text-classification
 tags:
 - cross-encoder
----
+---
+
+# Cross-Encoder for STSB-Multi
+This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
+The base model is [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased).
+
+## Training Data
+This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), specifically its Italian translation. The model predicts a score between 0 and 1 for the semantic similarity of two sentences.
+
+
+## Usage and Performance
+
+Pre-trained models can be used like this:
+```python
+from sentence_transformers import CrossEncoder
+model = CrossEncoder('model_name')
+scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
+```
+
+The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
+
+You can also use this model without sentence_transformers, loading it directly with the Transformers ``AutoModel`` class.
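For reference, a minimal sketch of that plain-Transformers route, assuming the checkpoint loads as a single-label sequence-classification head (hence ``AutoModelForSequenceClassification`` rather than the bare ``AutoModel``); `'model_name'` is a placeholder as in the snippet above, and whether a final sigmoid is needed depends on how the head was trained:

```python
# Minimal sketch: scoring sentence pairs with plain Transformers.
# Assumes the checkpoint exposes a single-label (regression-style) head;
# 'model_name' is a placeholder for the actual model id.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('model_name')
model = AutoModelForSequenceClassification.from_pretrained('model_name')
model.eval()

# Tokenize the pairs: first sentences and second sentences as parallel lists.
features = tokenizer(
    ['Sentence 1', 'Sentence 3'],
    ['Sentence 2', 'Sentence 4'],
    padding=True, truncation=True, return_tensors='pt',
)

with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)
    # CrossEncoder applies a sigmoid by default for single-label models;
    # apply it here too if you want scores in the 0-1 range.
    scores = torch.sigmoid(logits)

print(scores)
```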